Name                                                                     Last modified        Size
Image                                                                    26-Feb-2018 23:12   11.2M
fvp-base-gicv2-psci.dtb                                                  13-Jan-2018 03:20    8.9K
fvp-base-gicv2legacy-psci.dtb                                            13-Jan-2018 03:20    8.9K
fvp-base-gicv3-psci.dtb                                                  13-Jan-2018 03:20    9.4K
fvp-foundation-gicv2-psci.dtb                                            13-Jan-2018 03:20    6.6K
fvp-foundation-gicv2legacy-psci.dtb                                      13-Jan-2018 03:20    6.6K
fvp-foundation-gicv3-psci.dtb                                            13-Jan-2018 03:20    7.1K
fvp_bl1.bin                                                              13-Jan-2018 03:20   16.0K
fvp_fip.bin                                                              13-Jan-2018 03:20    2.5M
hwpack_linaro-vexpress64-rtsm_20150522-720_arm64_supported.manifest.txt  13-Jan-2018 03:20     446
hwpack_linaro-vexpress64-rtsm_20150522-720_arm64_supported.tar.gz        26-Feb-2018 23:12   31.0M
img-foundation.axf                                                       26-Feb-2018 23:12   11.4M
img.axf                                                                  26-Feb-2018 23:12   11.4M
juno.dtb                                                                 13-Jan-2018 03:20   10.2K
juno_bl1.bin                                                             13-Jan-2018 03:20   12.0K
juno_fip.bin                                                             13-Jan-2018 03:20    1.0M
linaro-image-lamp-genericarmv8-20150522-752.manifest                     13-Jan-2018 03:20  149.9K
linaro-image-lamp-genericarmv8-20150522-752.rootfs.manifest              13-Jan-2018 03:20   55.7K
linaro-image-lamp-genericarmv8-20150522-752.rootfs.tar.gz                26-Feb-2018 23:12  437.6M
linaro-image-minimal-genericarmv8-20150522-752.manifest                  13-Jan-2018 03:20    8.5K
linaro-image-minimal-genericarmv8-20150522-752.rootfs.manifest           13-Jan-2018 03:20    2.8K
linaro-image-minimal-genericarmv8-20150522-752.rootfs.tar.gz             26-Feb-2018 23:12   16.2M
startup.nsh                                                              13-Jan-2018 03:20     163
uefi_juno.bin                                                            13-Jan-2018 03:20  960.0K
vexpress64-openembedded_lamp-armv8-gcc-4.9_20150522-720.html             13-Jan-2018 03:20    3.5K
vexpress64-openembedded_lamp-armv8-gcc-4.9_20150522-720.img.gz           26-Feb-2018 23:12  508.6M
vexpress64-openembedded_lamp-armv8-gcc-4.9_20150522-720.img.gz.zsync     26-Feb-2018 23:12   19.5M
vexpress64-openembedded_minimal-armv8-gcc-4.9_20150522-720.html          13-Jan-2018 03:20    3.5K
vexpress64-openembedded_minimal-armv8-gcc-4.9_20150522-720.img.gz        26-Feb-2018 23:12   83.9M
vexpress64-openembedded_minimal-armv8-gcc-4.9_20150522-720.img.gz.zsync  26-Feb-2018 23:12   19.5M

AArch64 OpenEmbedded ARM Fast Models Release

The AArch64 Open Embedded Engineering Build for ARM Fast Models for ARMv8 is produced, validated and released by Linaro, based on the latest AArch64 open source software from Tianocore EDK2 (UEFI), the Linux kernel, ARM Trusted Firmware and OpenEmbedded. It is produced to enable software development for AArch64 prior to hardware availability, and facilitates development of software that is independent of the specific CPU implementation. This build focuses on availability of new features in order to provide a basis for ongoing and dependent software development.

Linaro releases monthly binary images of this engineering build. This release includes Linaro OpenEmbedded images for Versatile Express, Base and Foundation Fixed Virtual Platform (FVP) models from ARM. Sources are also made available so you can build your own images.

About the AArch64 OpenEmbedded Engineering Build

This release has been tested on the Base FVPs from ARM (since September 2013), in addition to the updated Foundation FVP (since November 2013) and Versatile Express FVP.

The ‘Base’ platform is an evolution of the Versatile Express (VE) platform that is better able to support new system IP, larger memory maps and multiple CPU clusters. Because of the changes in the Base platform memory map, software and device trees that specify peripheral addresses may need to be modified for the Base FVPs. Device trees for these FVPs are included in this release.

This build has been tested to work on the following FVPs:

  • Foundation_v8
  • FVP_VE_AEMv8A (called RTSM_VE_AEMv8A before v5.0)
  • FVP_Base_AEMv8A-AEMv8A
  • FVP_Base_Cortex-A57x4-A53x4
  • FVP_Base_Cortex-A57x1-A53x1

The Foundation_v8 FVP is free to use and can be downloaded from ARM, while the others are licensed from ARM. More information on these specific FVPs is included with this release documentation.

The Base and Foundation FVPs use the following software for boot and runtime firmware services in this engineering build:

  • ARM Trusted Firmware provides a reference implementation of secure world software for ARMv8-A, including Exception Level 3 (EL3) software
  • Tianocore EDK2, which provides a UEFI-based boot environment for the normal world operating system or hypervisor (in this case, Linux)
    The VE FVP continues to be booted using the AArch64 Linux boot-wrapper.

The same kernel image is used for all of these models, though the FVP_VE_AEMv8A image is encapsulated by the kernel boot-wrapper as in previous releases.

Where To Find More Information

More information on Linaro can be found on our website.

More information on ARM FVPs can be found on ARM’s website:

More information on ARM Trusted Firmware and Platform Design Documents (PDDs) can be found on the project’s GitHub repository:

More information on building UEFI for AArch64 can be found on Linaro’s wiki:

Feedback and Support

Relating to the ARM Fixed Virtual Platforms:

  • For technical support on the Foundation_v8 FVP, use the ARM support forums.
  • For technical support on the Base and VE FVPs, contact ARM Fast Models Support.
  • To provide feedback to ARM on any of the FVPs, please refer to the respective product documentation.

To provide feedback or ask questions about the ARM Trusted Firmware, please create a new Issue at GitHub.

Subscribe to the important Linaro mailing lists and join our IRC channels to stay on top of Linaro development.

Changes in this release

  • OpenDataPlane recipe added
  • Linaro PowerDebug recipe added
  • ACPICA updated to 20131115
  • Add initscript for gator

Known Issues

Linaro OpenEmbedded releases are made up of the following components:

  • *.img.gz – pre-built images for minimal, LAMP and LEG root filesystems
  • hwpack_*.tar.gz – a single hardware pack for the Foundation, Versatile Express [1] and Base platform models
  • linaro-image-*.rootfs.tar.gz – a choice of root file system (RootFS) images
  • Image – the kernel used by UEFI
  • img-foundation.axf – the kernel binary wrapped for the Foundation model
  • img.axf – the kernel binary wrapped for the Versatile Express model
  • *_bl1.bin – ARM Trusted Firmware BL1 binaries
  • *_fip.bin – ARM Trusted Firmware Firmware Image Package (FIP) binaries
  • *.dtb files – Device Tree Binaries

Other files, such as *.manifest, *.txt and *.html, provide information such as package contents or MD5 checksums for the files with which they share a common filename.

[1] Linaro does not provide support for running the Versatile Express models.
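The published MD5 sums can be used to verify downloads before booting them. As a quick illustration of the checksum step (the file name and its contents below are stand-ins, not values from this release):

```shell
# Create a stand-in artifact and compute its MD5 sum, exactly as you would
# for a downloaded file before flashing or booting it.
printf 'hello\n' > artifact.bin
md5sum artifact.bin   # prints b1946ac92492d2347c6235b4d2611184  artifact.bin
```

Compare the printed sum against the value published alongside the artifact.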

Using a pre-built image


  • Ubuntu 12.04 64-bit or newer on your desktop PC
  • Foundation, Versatile Express, Base platform fast model (Linaro ARMv8 Engineering) or QEMU 2.1+
  • All artifacts for this release downloaded from the above list.
  • linaro-image-tools 2014.04 or later

Installation Steps

  • Unzip the downloaded pre-built image
gunzip vexpress64-openembedded_*-armv8*.img.gz
  • Skip to the section below showing how to Boot the image.
    Replace sd.img with the filename of the image unzipped above.

Build Your Own Image

Installing Linaro Image Tools

There are multiple ways you can get the latest Linaro Image Tools:

  • Method 1: Install them from the Linaro Image Tools PPA
sudo add-apt-repository ppa:linaro-maintainers/tools
sudo apt-get update
sudo apt-get install linaro-image-tools
  • Method 2: Install from release tarball
cd <installation directory>
tar zxvf linaro-image-tools-2014.04.tar.gz
  • Method 3: Building from the GIT repository
cd <working directory>
git clone git://

Creating a disk image

  • Download the hardware pack (hwpack_*.tar.gz) from the above list
  • Download the rootfs (linaro-*.tar.gz) of your choice from the above list, LAMP is usually a good selection for a more fully featured disk image
  • Create the image
linaro-media-create --dev fastmodel --output-directory fastmodel --image_size 2000M --hwpack <hwpack filename> --binary <rootfs filename>
cd fastmodel
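As a worked example, here is the same command with the LAMP hardware pack and rootfs filenames from the listing above substituted in (replace them with the files you actually downloaded):

```shell
# Illustrative invocation using the artifact names from this release's listing;
# requires linaro-image-tools to be installed.
linaro-media-create --dev fastmodel --output-directory fastmodel \
  --image_size 2000M \
  --hwpack hwpack_linaro-vexpress64-rtsm_20150522-720_arm64_supported.tar.gz \
  --binary linaro-image-lamp-genericarmv8-20150522-752.rootfs.tar.gz
cd fastmodel
```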

Boot the image

Using the FVP Base AEMv8 model

ln -sf fvp-base-gicv2-psci.dtb fdt.dtb
<path to model installation>/models/Linux64_GCC-4.1/FVP_Base_AEMv8A-AEMv8A \
-C pctl.startup= \
-C bp.secure_memory=0 \
-C cluster0.NUM_CORES=4 \
-C cluster1.NUM_CORES=4 \
-C cache_state_modelled=1 \
-C bp.pl011_uart0.untimed_fifos=1 \
-C bp.secureflashloader.fname=fvp_bl1.bin \
-C bp.flashloader0.fname=fvp_fip.bin \
-C bp.virtioblockdevice.image_path=sd.img

Using the FVP Base Cortex model

To boot the A57×1 + A53×1 model:

ln -sf fvp-base-gicv2-psci.dtb fdt.dtb
<path to model installation>/models/Linux64_GCC-4.1/FVP_Base_Cortex-A57x1-A53x1 \
-C pctl.startup= \
-C bp.secure_memory=0 \
-C cache_state_modelled=1 \
-C bp.pl011_uart0.untimed_fifos=1 \
-C bp.secureflashloader.fname=fvp_bl1.bin \
-C bp.flashloader0.fname=fvp_fip.bin \
-C bp.virtioblockdevice.image_path=sd.img

To boot the A57×4 + A53×4 model, use the same command, only specify a different model binary:

ln -sf fvp-base-gicv2-psci.dtb fdt.dtb
<path to model installation>/models/Linux64_GCC-4.1/FVP_Base_Cortex-A57x4-A53x4 \
-C pctl.startup= \
-C bp.secure_memory=0 \
-C cache_state_modelled=1 \
-C bp.pl011_uart0.untimed_fifos=1 \
-C bp.secureflashloader.fname=fvp_bl1.bin \
-C bp.flashloader0.fname=fvp_fip.bin \
-C bp.virtioblockdevice.image_path=sd.img

Using Ethernet networking with the FVP Base Platform Models

To enable networking in the Base FVP models, you should install a network TAP on your local machine. See the Fast Models User Guide for more information on setting up a TAP for use with the models.

Then, with the network TAP enabled, run the model as above, but with these additional parameters:

-C bp.hostbridge.interfaceName=ARM$USER \
-C bp.smsc_91c111.enabled=true \
-C bp.smsc_91c111.mac_address=<MAC address, eg, 00:11:22:33:44:55>
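The TAP device itself must exist before the model starts. One possible way to create it with iproute2 is sketched below; the Fast Models User Guide remains the authoritative reference, and the interface name must match the bp.hostbridge.interfaceName value used above:

```shell
# Sketch: create and bring up a TAP named to match the model parameter above.
# Requires root privileges; the ARM$USER naming follows the convention above.
sudo ip tuntap add dev "ARM$USER" mode tap user "$USER"
sudo ip link set "ARM$USER" up
```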

Booting the image on the Foundation Model

The latest version of the Foundation Model (version 0.8.5206, as of November 2013) is compatible with the ARM Trusted Firmware and UEFI. To launch the Foundation Model, use the following command:

ln -sf fvp-foundation-gicv2-psci.dtb fdt.dtb
<path to model installation>/Foundation_v8 \
--cores=4 \
--no-secure-memory \
--visualization \
--gicv3 \
--data=fvp_bl1.bin@0x0 \
--data=fvp_fip.bin@0x8000000

Note: it is intentional that we suggest using the GICv2 DTB file whilst specifying GICv3 on the model command line. In this mode, you are using the GICv3 controller in GICv2 compatibility mode, which is the default recommended by ARM.

Booting the image on the Foundation Model using the AXF file

If you are using an older version of the Foundation Model (i.e. older than version 0.8.5206) and wish to use a pre-built image, unzip the downloaded pre-built image and, together with the img-foundation.axf file, run the following command:

<path to model installation>/Foundation_v8 --image img-foundation.axf \
  --block-device vexpress64-openembedded_IMAGENAME-armv8_IMAGEDATE-XYZ.img

To use the image you built with linaro-media-create in the steps above, run this command:

<path to model installation>/Foundation_v8 --image img-foundation.axf \
  --block-device sd.img

Using Ethernet networking on the Foundation Models

The Foundation Models can be configured to use either NAT or Bridged mode.

For the basic NAT configuration, add the following option to the command used to launch the model:

--network nat

For bridged networking, you will need to set up a network TAP as per the Fast Models User Guide. Then, the following options may be used:

--network bridged --network-bridge=ARM$USER
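Note that ARM$USER is expanded by your shell, so the bridge/TAP name embeds your username. For example, for a hypothetical user named alice:

```shell
# Demonstrates how the interface name is formed from $USER.
USER=alice
echo "ARM$USER"   # prints ARMalice
```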

Details of the Foundation Model’s command line options can be found in the Foundation model documentation on ARM’s website.

Using QEMU

QEMU has supported ARMv8 since the 2.1 release. To boot a Linaro image with QEMU, you will need the kernel (filename Image) and your selected prebuilt image (vexpress64-openembedded_IMAGENAME-armv8_IMAGEDATE-XYZ.img.gz). Uncompress the downloaded image and run:

qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
  -kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
  -netdev user,id=user0 -device virtio-net-device,netdev=user0  -device virtio-blk-device,drive=disk \
  -drive if=none,id=disk,file=vexpress64-openembedded_IMAGENAME-armv8_IMAGEDATE-XYZ.img

See the QEMU manual for details on installing and using QEMU.

Building locally

Initial setup

mkdir openembedded
cd openembedded
git clone git://
cd jenkins-setup
git checkout release-${YY}.${MM}
cd ..
sudo bash jenkins-setup/
bash jenkins-setup/

This will clone all required repositories and perform the initial setup.
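In the checkout step above, ${YY} and ${MM} are the two-digit year and month of the release you want. For example, for this 2015.05 release the branch name expands as follows:

```shell
# Expansion of the release branch name used in the checkout above.
YY=15
MM=05
echo "release-${YY}.${MM}"   # prints release-15.05
```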

Do a build

cd openembedded-core
. oe-init-build-env ../build
bitbake bash

Of course you can use other targets instead of “bash”. Use “linaro-image-minimal” to build a simple rootfs.

Usable build targets

  • linaro-image-minimal – very minimal rootfs
  • linaro-image-lamp – LAMP stack and toolchain
  • linaro-image-leg-java – Same as LAMP image but also includes OpenJDK-7 and OpenJDK-8

Building the Linaro Kernel


  • Ubuntu 12.04 64-bit system. You can download Ubuntu from the Ubuntu website.
  • git
sudo apt-get install build-essential git
  • Linaro AArch64 cross toolchain
mkdir -p ~/bin
cd ~/bin
tar xf gcc-linaro-aarch64-linux-gnu-4.9-2014.08_linux.tar.xz

Get the Linaro Kernel Source

git clone git://
cd linux-linaro-tracking
git checkout `git tag | grep ^ll-${YYYY}${MM} | tail -1`

Where ${YYYY} is the current year, e.g. 2015, and ${MM} is the current month, e.g. 03.
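To see what the pipeline above selects, here is the same filter run over some stand-in tag names (illustrative only):

```shell
# grep keeps only the ll-YYYYMM tags for the requested month;
# tail -1 picks the last matching tag.
printf 'll-20150301\nll-20150115\nll-20150310\n' \
  | grep '^ll-201503' | tail -1   # prints ll-20150310
```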

Create a kernel config

Do not use the arm64 defconfig; instead, build a config from the config fragments that Linaro provides:

ARCH=arm64 scripts/kconfig/ \
linaro/configs/linaro-base.conf \
linaro/configs/linaro-base64.conf \
linaro/configs/distribution.conf \
linaro/configs/vexpress64.conf \
linaro/configs/kvm-host.conf

Note: the config fragments are part of the git repository and the source tarball.

Build the kernel

To build the kernel Image, use the following command:

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image

Install your kernel

Create the Device Tree blob if you don’t have one in your Linaro image:

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- dtbs

Copy the kernel and the DTB file for your model to the fastmodel directory created in the installation steps above.

cp arch/arm64/boot/Image <fastmodel dir>
cp arch/arm64/boot/dts/*.dtb <fastmodel dir>

Building UEFI

To rebuild the UEFI binaries, see the UEFI Wiki and specifically the UEFI build page.

ARM Fast Models

ARM provides a number of ready-to-use simulation models of platforms containing ARM processors, built with ARM’s Fast Models toolkit and referred to as Fixed Virtual Platforms (FVPs). This OpenEmbedded Engineering Build is designed to work with a number of those FVPs which contain ARMv8 processors:

  • Foundation_v8
  • FVP_VE_AEMv8A (called RTSM_VE_AEMv8A before v5.0)
  • FVP_Base_AEMv8A-AEMv8A
  • FVP_Base_Cortex-A57x4-A53x4
  • FVP_Base_Cortex-A57x1-A53x1

There are two primary platform definitions used by the FVPs:

  • VE – a replica of the ARM Versatile Express hardware development boards
  • Base – an evolution of Versatile Express that can support larger memory maps, multiple clusters and some new standard peripherals

The Foundation platform is a subset of the platform peripherals in the Base platform.

The following table describes the essential differences between these ARMv8 models:

Model                        Platform    Processors                   GIC           virtio  mmc  ARM TF support [1]
FVP_VE_AEMv8A                VE          AEMv8 x1-4 [2]               GICv2                 y
Foundation_v8                Foundation  AEMv8 x1-4 [2]               GICv2/v3 [4]  y            y [3]
FVP_Base_AEMv8Ax4-AEMv8Ax4   Base        AEMv8 x1-4 + AEMv8 x1-4 [2]  GICv2/v3 [4]  y       y    y
FVP_Base_Cortex-A57x4-A53x4  Base        A57x4 + A53x4                GICv2         y       y    y
FVP_Base_Cortex-A57x1-A53x1  Base        A57x1 + A53x1                GICv2         y       y    y

(virtio and mmc are the supported block device types.)

The platform support in each of these models evolves gradually over time; this information is correct with respect to build 0.8.5206 of the Foundation_v8 FVP, and builds 0.8.5108 and 0.8.5202 of the other FVPs.

[1] Platforms that do not support the ARM Trusted Firmware need to use a boot-wrapper for the kernel image.

[2] The number of CPU cores in each cluster can be configured when running the AEMv8 FVPs.

[3] ARM Trusted Firmware requires the new Foundation_v8 FVP (build 0.8.5206), released mid-November 2013.

[4] The default Device Trees (fvp-base-gicv2-psci.dtb and fvp-foundation-gicv2-psci.dtb) only present a GICv2 interrupt controller node to Linux. However, alternative Device Trees (fvp-base-gicv3-psci.dtb and fvp-foundation-gicv3-psci.dtb) provided in this Engineering Build include the GICv3 node, which is needed for development of OS and hypervisor GICv3 support. It is also possible to configure the GIC to be GICv2 when running these FVPs.
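For example, to develop against the GICv3 node on the Base platform, repoint the fdt.dtb symlink before launching the model (for the Foundation model, use fvp-foundation-gicv3-psci.dtb instead):

```shell
# Select the GICv3 device tree instead of the default GICv2 one.
ln -sf fvp-base-gicv3-psci.dtb fdt.dtb
```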

The Foundation_v8 FVP is free to use and can be downloaded from ARM; the other FVPs are licensed from ARM. More information on the ARM FVPs, including download links, can be found on the ARM website.

OpenJDK FCS Release for AArch64

This release of OpenJDK is a fully functional, stable release of OpenJDK. It has been extensively tested using test suites including the JTREG, JCK and Mauve test suites, as well as real-world applications such as Hadoop and Eclipse and industry standard benchmarks such as SPECjvm2008 and SPECjbb2013. There are no known critical failures when used with standard options. There are a number of failures when used with certain non-standard options. These failures are detailed below.

The release is being made available in two variants:

OpenJDK 8

The OpenJDK 8 release is based on the jdk8-b132 tag of the main OpenJDK 8 tree. This corresponds to the “General Availability” release announced on 2014/03/18.

OpenJDK 7

As JDK8 is a complete revision of JDK, a backport to OpenJDK 7 is provided for compatibility with the known and proven JDK7. This release is based on the jdk7u60-b04 tag of the JDK7 updates tree.

Supported HotSpot Technologies

Both OpenJDK releases support the Client (C1) and Server (C2) JIT compilers, in addition to the Template assembler based interpreter and the ‘Zero’ C based interpreter.

The default execution model is the Server (C2) JIT using Tiered Compilation. Under this model, code is first executed by the Template interpreter. When code is deemed to be ‘hot’, it is compiled initially using the Client (C1) JIT. The code compiled by the C1 JIT contains profiling code. If a method is deemed to be sufficiently hot, it is recompiled with the Server (C2) JIT, which uses the profiling information collected by the C1 JIT to generate highly optimized code.

Different execution models may be selected using the following command line options:

  • -server – The default Server (C2) JIT
  • -client – The Client (C1) JIT
  • -Xint – The Template assembler based interpreter
  • -zero – The ‘Zero’ C based interpreter

In addition, the following options may be specified:

  • -XX:+TieredCompilation
    • The default with -server. Enable Tiered compilation as described above.
  • -XX:-TieredCompilation
    • Disable Tiered compilation. Code is compiled directly with the Server (C2) JIT. Using this option will generate less efficient code because the profiling information gathered by the C1 compiler will not be available.
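Putting these together, typical invocations look like the following (MyApp is a placeholder for your own main class):

```shell
# Illustrative launcher invocations; only the execution-model flags matter here.
java -server MyApp                         # default: C2 with tiered compilation
java -server -XX:-TieredCompilation MyApp  # C2 only, no C1 profiling data
java -client MyApp                         # C1 JIT only
java -Xint MyApp                           # template interpreter only
```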

Source and binary bundles

Source bundles may be downloaded from:

Pre-built binaries may be downloaded from:

Test Results

There are two categories of tests run on a daily basis.

Lava Testing

These tests are run on the Foundation model. Because of the performance limitations of the Foundation model, a limited subset of the JTreg and Mauve test suites is run daily.

Results are published daily.

Hosted Testing

These tests are run on ARMv8 hardware which is hosted in our Cambridge lab but is not publicly available in LAVA. The entire JTreg test suite is run on a daily basis for both Client and Server JITs.

Results are published by email; the mailing list may be subscribed to by visiting:

Note that, for the hosted tests, if the daily run determines that there has been no change to the source tree, then the tests are not run. This is done to reduce traffic on the email list.

Note: in spite of the slowness of the tests executed on the Foundation model, which prevents complete test execution on a daily basis, we periodically verify that complete test execution produces identical results on both the Foundation model and the real hardware in our lab. Please note that no performance numbers are posted to the mailing lists, only functional pass/fail test results.

Known Failures

-XX:+UseBiasedLocking causes a fatal error

  • The implementation of Biased Locking has not been completed in the AArch64 port and use of this option may cause fatal errors.

Workaround: Do not use the -XX:+UseBiasedLocking option.

Assertion failure with -XX:-InlineObjectHash and -XX:-ProfileInterpreter

This combination of options may cause the following fatal error to be generated:

  • assert(false) failed: this call site should not be polymorphic

Workaround: Do not use this combination of options.

Acknowledgements


We would like to thank Red Hat, and especially Andrew Haley and Andrew Dinn, for starting this project and for the huge effort they have put in to deliver a high quality, high performance Java solution for AArch64 that is available as open source software. We would also like to thank everyone in the open source community who has contributed to these projects, whether through bug fixes or suggestions, or by testing early releases of the software and submitting bug reports.