
9/22/2024

What is TorchOps.cpp.inc in torch-mlir

 

What is TorchOps.cpp.inc?

  • TorchOps.cpp.inc: This file contains implementations of the operations for the torch-mlir dialect. It is typically generated from .td (TableGen) files that define the dialect and its operations.
  • The .td (TableGen) files describe MLIR operations in a high-level, declarative form, and the CMake build process automatically generates .cpp.inc files (like TorchOps.cpp.inc) from these .td files. A short sketch of how the generated ops surface from Python follows this list.
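
To make this concrete: the ops whose C++ classes land in TorchOps.cpp.inc are the torch-dialect ops you see when importing a PyTorch program. Below is a minimal sketch, assuming a torch-mlir build whose Python bindings expose torch_mlir.fx.export_and_import (the import entry point has moved between releases, so treat this as illustrative):

import torch
from torch_mlir import fx

class AddModule(torch.nn.Module):
    def forward(self, a, b):
        return a + b

# Importing produces torch-dialect IR containing ops such as
# torch.aten.add.Tensor; the C++ classes behind these ops are the ones
# generated into TorchOps.cpp.inc from the declarations in TorchOps.td.
module = fx.export_and_import(AddModule(), torch.ones(4), torch.ones(4))
print(module)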

How it gets generated:

  1. TableGen: The TableGen tool processes .td files that define the operations and attributes for the torch dialect.
  2. CMake Build: During the CMake build process, the mlir-tblgen tool is invoked to generate various .inc files, including TorchOps.cpp.inc.

Where It Is Generated:

The TorchOps.cpp.inc file is usually generated in the build directory under the subdirectories for the torch-mlir project. For example:


build/tools/torch-mlir/lib/Dialect/Torch/IR/TorchOps.cpp.inc

This file gets included in the compiled source code to provide the implementations of the Torch dialect operations; by MLIR convention, TorchOps.cpp defines GET_OP_CLASSES and then does #include "TorchOps.cpp.inc" to pull in the generated op class bodies.

How to Ensure It Is Generated:

If the file is missing, it's likely because there was an issue in the build process. Here’s how to ensure it’s generated:

  1. Ensure CMake and Ninja Build: Make sure the CMake and Ninja build completes without errors. You can confirm that the TorchOps.cpp.inc file was generated by looking in the build directory:

    ls build/tools/torch-mlir/lib/Dialect/Torch/IR/
  2. Check for TableGen Files: Make sure that the .td files (such as TorchOps.td) are present in the source directory. These are used by mlir-tblgen to generate the .cpp.inc files.

Debugging if Not Generated:

If TorchOps.cpp.inc or similar files are not generated, ensure:

  • You are running the full build using ninja or make.
  • mlir-tblgen is being invoked during the build process (you should see log messages referencing mlir-tblgen).

IREE test code and explanation


from iree import compiler, runtime
import numpy as np
import sys

def print_step(step):
    print(f'Step: {step}', file=sys.stderr)

# MLIR code as a string
module_str = '''
func.func @simple_add(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
  %0 = arith.addf %arg0, %arg1 : tensor<4xf32>
  return %0 : tensor<4xf32>
}
'''

print_step('Compiling module')
compiled_module = compiler.compile_str(module_str, target_backends=['llvm-cpu'])

print_step('Creating runtime config')
config = runtime.Config('local-task')

print_step('Creating system context')
ctx = runtime.SystemContext(config=config)

print_step('Getting VM instance')
# Reuse the VmInstance owned by the runtime Config so the VM module and
# the SystemContext share the same instance.
vm_instance = config.vm_instance

print_step('Creating VM module')
vm_module = runtime.VmModule.from_flatbuffer(vm_instance, compiled_module, warn_if_copy=False)

print_step('Adding VM module to context')
ctx.add_vm_module(vm_module)

print_step('Getting device')
device = runtime.get_driver('local-task').create_default_device()
print(f'Device: {device}', file=sys.stderr)

print_step('Getting function')
f = ctx.modules.module.simple_add

print_step('Creating device arrays')
arg1 = runtime.asdevicearray(device, np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32))
arg2 = runtime.asdevicearray(device, np.array([5.0, 6.0, 7.0, 8.0], dtype=np.float32))

print_step('Calling function')
result = f(arg1, arg2)

print_step('Getting result')
print(result.to_host())

print_step('Script completed successfully')


To run this code:

  1. Save it to a file, e.g., test_iree.py.
  2. Make sure you have IREE and its Python bindings installed and properly set up in your environment.
  3. Run the script using Python:
    python test_iree.py

This script will:

  1. Define a simple MLIR function that adds two 4-element float32 tensors.
  2. Compile this MLIR code to an IREE module.
  3. Set up the IREE runtime environment.
  4. Create input data as NumPy arrays.
  5. Execute the compiled function with the input data.
  6. Print the result.

The output should show each step of the process and finally print the result, which should be [ 6. 8. 10. 12.].

This example demonstrates the basic workflow for testing MLIR code with IREE using Python. You can modify the MLIR code string and input data to test different functions and operations as needed.
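
For example, here is a minimal variation (a sketch assuming the same environment as above) that swaps arith.addf for arith.mulf to multiply the tensors elementwise:

from iree import compiler, runtime
import numpy as np

# Same compile/run pattern as the script above, with a multiply instead.
module_str = '''
func.func @simple_mul(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
  %0 = arith.mulf %arg0, %arg1 : tensor<4xf32>
  return %0 : tensor<4xf32>
}
'''

compiled = compiler.compile_str(module_str, target_backends=['llvm-cpu'])
config = runtime.Config('local-task')
ctx = runtime.SystemContext(config=config)
ctx.add_vm_module(runtime.VmModule.from_flatbuffer(config.vm_instance, compiled, warn_if_copy=False))

device = runtime.get_driver('local-task').create_default_device()
a = runtime.asdevicearray(device, np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32))
b = runtime.asdevicearray(device, np.array([5.0, 6.0, 7.0, 8.0], dtype=np.float32))
print(ctx.modules.module.simple_mul(a, b).to_host())  # expected: [ 5. 12. 21. 32.]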



9/17/2024

What is IREE turbine

IREE-Turbine is a package that combines PyTorch, Torch-MLIR, IREE, and additional tooling into a comprehensive solution for compiling, optimizing, and executing PyTorch models using IREE's infrastructure. IREE-Turbine offers the following key features:

1. AOT Export: Ahead-Of-Time compilation of PyTorch modules (nn.Modules) into deployment-ready artifacts that can take full advantage of IREE's runtime features.

2. Eager Execution: A torch.compile backend and a Turbine Tensor/Device for interactive PyTorch sessions, letting users work in a familiar PyTorch environment while leveraging IREE's optimization capabilities.

3. Custom Ops: Integration for defining custom PyTorch operations and implementing them with either IREE's backend IR or the Pythonic kernel language, extending PyTorch's functionality while remaining compatible with IREE's optimization pipeline.

In essence, IREE-Turbine acts as a bridge between PyTorch and IREE, allowing PyTorch users to benefit from IREE's advanced compilation and runtime features while keeping a familiar PyTorch-based workflow. It aims to provide a seamless experience for compiling PyTorch models to run efficiently on the various hardware targets IREE supports.
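
As a concrete illustration of the AOT path, here is a minimal export sketch. It assumes the iree-turbine Python package is installed (earlier releases expose the same API under the shark_turbine name), and exact entry points can differ between versions:

import torch
import iree.turbine.aot as aot

class SimpleAdd(torch.nn.Module):
    def forward(self, a, b):
        return a + b

# Export the nn.Module ahead of time, inspect the MLIR, and compile it
# into an in-memory IREE deployment artifact.
exported = aot.export(SimpleAdd(), torch.ones(4), torch.ones(4))
exported.print_readable()
compiled_binary = exported.compile(save_to=None)

The resulting artifact can then be loaded and executed with iree.runtime, much like the compile_str output in the earlier post.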