IEEE AI on Arm Professional Training Workshop Labs

Neoverse-based Labs for 2026 IEEE Workshop

  • KleidiAI and Arm Architectural Features for Optimizing Generative AI on Neoverse and Armv8-A
  • Deploying ML models at the edge on Arm Ethos-U Neural Processors using ExecuTorch (Simulated via Fixed Virtual Platform run on Neoverse CPU)

Launch AWS, connect to instance, and clone repo

  1. Launch an AWS EC2 instance

    • Go to Amazon EC2 and create a new instance.
  • Key pair: create a new key pair for the SSH connection (e.g., yourkey.pem).
    • Choose an AMI: Use the Ubuntu 22.04 AMI as the operating system.
    • Instance type: Select m7g.xlarge (Graviton-based instance with Arm Neoverse cores).
    • Storage: Add 64 GB of root storage.
  2. Connect to the instance via SSH
    Use the following command to establish an SSH connection (replace with your instance details). The -L 8888:localhost:8888 option forwards the instance's Jupyter port to your local machine, which you will need when launching the labs:

    ssh -i "yourkey.pem" -L 8888:localhost:8888 ubuntu@<ec2-public-dns>
  3. Clone the repository
    Once connected to the instance, clone the repository:

    git clone https://github.com/arm-education/IEEE-Workshop-Labs.git

Neoverse Lab 1 setup

  1. Run the setup script
    Change to the repository directory and run the setup script:

    cd IEEE-Workshop-Labs
    ./setup_neoverse_lab_1.sh
  2. Activate the virtual environment
    After the setup completes, activate the virtual environment:

    source graviton_env/bin/activate
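
    If you want to confirm the environment is active before continuing, a quick generic Python check (not specific to this repo) is:

    ```python
    import sys

    def in_virtualenv() -> bool:
        # Inside a venv, sys.prefix points at the environment while
        # sys.base_prefix still points at the base interpreter.
        return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

    print(in_virtualenv())
    ```

    It should print True when graviton_env is active.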
  3. Log In or Sign Up to Hugging Face
    If you don’t already have a Hugging Face account, create one. Otherwise, log in to your existing account.

  4. Visit the Model Page
    Navigate to the apple/OpenELM page on Hugging Face.

  5. Request Access
    On the model's page, click the "Access repository" button if present. You’ll be prompted to review and agree to the terms of use.

  6. Visit the Llama Model Page
    Navigate to the meta-llama/Llama-2-7b-hf page on Hugging Face. You need access to this model because OpenELM uses the Llama tokenizer.

  7. Request Access
    On the model's page, click the "Access repository" button. You’ll be prompted to review and agree to the terms of use.

  8. Wait for Approval
    After you agree to the terms, access will be granted. This may take a few hours; you should receive an email notification once approval is complete.

  9. Login via the Command Line

    To download the model, you may need to authenticate your Hugging Face account on the instance. Run the following command in your terminal and follow the prompts to log in. This will involve creating an access token; a 'Read' token is sufficient:

    huggingface-cli login

    You are now ready to start Lab 1.

    To download the model in advance, run the first code cell in Lab 1.
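
    To confirm the login stored a token, you can check for the token file. This sketch assumes the default cache location; setting HF_HOME moves it elsewhere:

    ```python
    from pathlib import Path

    # Default token location written by `huggingface-cli login`
    # (an assumption; HF_HOME can relocate the cache).
    token_path = Path.home() / ".cache" / "huggingface" / "token"
    print(token_path.exists())
    ```

    If this prints False, rerun huggingface-cli login before starting the lab.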

ExecuTorch Lab 2 setup

  1. Run the setup script
    Change to the repository directory and run the setup script:

    ./setup_executorch_lab_2.sh
  2. From the base directory of this repo, activate the virtual environment and change into the provided NPU_Lab_ExecuTorch/executorch folder:

    source ./NPU_lab_venv/bin/activate
    cd NPU_Lab_ExecuTorch/executorch
  3. From the terminal inside the executorch directory, run the following two scripts:

    ./install_executorch.sh
    ./examples/arm/setup.sh --i-agree-to-the-contained-eula 
    cd ../..

Both scripts may take some time to complete.

You are now ready to start Lab 2.

  1. Launch the lab
    Start Jupyter Lab by running:

    jupyter lab

    Copy the link provided in the terminal output, open it in your local browser, and follow the instructions in the notebooks.
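
Because the SSH tunnel maps port 8888 on the instance to the same port locally, the printed localhost link works directly in your local browser. The link includes a login token; a small sketch of what to look for, using a hypothetical sample of Jupyter's startup output:

```python
import re

# Hypothetical example of the URL Jupyter prints on startup.
sample = "http://localhost:8888/lab?token=abc123def456"
match = re.search(r"\?token=(\w+)", sample)
print(match.group(1) if match else "no token found")  # → abc123def456
```

Copy your own URL exactly as printed; the token changes on every launch.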