
Add Backend LLVM Code Gen Support #141

Draft
SousaTrashBin wants to merge 66 commits into master from add_gpu_support

Conversation

Collaborator

SousaTrashBin commented Mar 12, 2026

CPU

Implementation

  • Add LLVM decorator (supports params like debug, cache, etc.) ◆
  • Add LLVM Subset ◇
  • Add Type Conversion ◇
  • Implement LLVM Subset validation function ◆
  • Add AST to LLVM Subset Converter ◆
  • Add LLVM Subset to AST Converter (llvmlite adapter) ◇
  • Add code gen ◇
  • Add LLVM code execution (via llvmlite) ◇
  • Trigger Kernel generation/execution & Aeon data type conversion ◇
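
The code-gen and execution steps above can be sketched end to end with llvmlite. This is a minimal illustration of the approach, not the PR's actual implementation: build IR with the IR builder, JIT it with MCJIT, and call it through ctypes.

```python
import ctypes
from llvmlite import ir, binding as llvm

# One-time LLVM initialization for native JIT execution.
llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

# Build a trivial i64 add(i64, i64) function with the IR builder.
module = ir.Module(name="demo")
i64 = ir.IntType(64)
fn = ir.Function(module, ir.FunctionType(i64, [i64, i64]), name="add")
builder = ir.IRBuilder(fn.append_basic_block("entry"))
builder.ret(builder.add(*fn.args))

# JIT-compile with MCJIT and call the native function through ctypes.
target_machine = llvm.Target.from_default_triple().create_target_machine()
engine = llvm.create_mcjit_compiler(llvm.parse_assembly(str(module)), target_machine)
engine.finalize_object()
add = ctypes.CFUNCTYPE(ctypes.c_int64, ctypes.c_int64, ctypes.c_int64)(
    engine.get_function_address("add")
)
print(add(2, 3))  # 5
```

The same shape (IR module → `parse_assembly` → MCJIT → function pointer) scales to generated Aeon kernels; only the IR construction step grows.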

Possible Improvements

  • Add support for C functions (printf, free, malloc) ◇
  • Add support for arrays ◇
  • Add LLVM optimizations (liquid types) ◇
  • Add JIT cache ◇
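
The JIT cache item could be as simple as memoizing compilation on a content hash of the IR text. A hypothetical sketch (none of these names exist in the PR):

```python
import hashlib

class JITCache:
    """Hypothetical in-memory JIT cache: compile each IR module at most once,
    keyed by a SHA-256 hash of its textual IR."""

    def __init__(self):
        self._compiled = {}
        self.misses = 0  # number of actual compilations performed

    def get_or_compile(self, ir_text, compile_fn):
        key = hashlib.sha256(ir_text.encode("utf-8")).hexdigest()
        if key not in self._compiled:
            self.misses += 1
            self._compiled[key] = compile_fn(ir_text)
        return self._compiled[key]

cache = JITCache()
first = cache.get_or_compile("define i64 @f() { ret i64 0 }", lambda t: object())
second = cache.get_or_compile("define i64 @f() { ret i64 0 }", lambda t: object())
print(first is second, cache.misses)  # True 1
```

Hashing the IR text (rather than keying on the function name) also makes the cache safe to persist to disk later, since identical programs map to identical keys.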

GPU

Implementation

  • Add GPU decorator (supports gpu type, block/thread count, etc.) ◆

CUDA

  • Add LLVM Subset (need to check if shareable between CUDA/AMDGPU) ◇
  • Add Type Conversion ◇
  • Add AST to LLVM Subset Converter and implement validation function ◇
  • Add LLVM Subset to AST Converter (llvmlite adapter) & code gen ◇
  • Add LLVM code execution (via llvmlite) ◇
  • Trigger Kernel generation/execution & Aeon data type conversion ◇
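
For the CUDA path, llvmlite's IR layer can already emit the kernel skeleton the NVPTX backend expects: an `nvptx64-nvidia-cuda` triple plus an `nvvm.annotations` entry marking the function as a kernel. This is only a sketch of that shape; actual PTX generation and kernel launch are out of scope here.

```python
from llvmlite import ir

# Sketch: a CUDA kernel skeleton built with llvmlite's IR layer only.
module = ir.Module(name="cuda_demo")
module.triple = "nvptx64-nvidia-cuda"

f32p = ir.PointerType(ir.FloatType())
kernel = ir.Function(
    module, ir.FunctionType(ir.VoidType(), [f32p, f32p]), name="copy_kernel"
)
builder = ir.IRBuilder(kernel.append_basic_block("entry"))
builder.ret_void()  # body elided; a real kernel would index by thread id

# Mark the function as a kernel entry point for the NVPTX backend.
md = module.add_metadata([kernel, "kernel", ir.IntType(32)(1)])
module.add_named_metadata("nvvm.annotations").add(md)
print("nvvm.annotations" in str(module))  # True
```

Whether this IR construction code is shareable with a future AMDGPU path (as the checklist asks) mostly comes down to the triple, address spaces, and the kernel-marking convention, which differ per backend.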

Possible Improvements

...

Symbol Meaning

  • ◆ Has Tests | ◇ Needs Tests

Collaborator Author

In which step of the pipeline would it make more sense to generate the LLVM IR code? During evaluation? Or would it make sense to add an extra pipeline step that only creates that intermediate representation, which could then be invoked during eval?
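
The second option could look like this. A minimal sketch with entirely hypothetical names: one dedicated pass lowers definitions to IR up front, and evaluation only compiles (lazily, on first call) and dispatches.

```python
# Hypothetical sketch of the "extra pipeline step" option. Lowering is stubbed
# as string building; a real step would produce llvmlite modules.
class LowerToIRStep:
    def run(self, program):
        # program: mapping of function name -> source text
        return {name: f"; ir for {name}\n{src}" for name, src in program.items()}

class Evaluator:
    def __init__(self, ir_by_name, compile_fn):
        self.ir_by_name = ir_by_name
        self.compile_fn = compile_fn
        self._jitted = {}

    def call(self, name, *args):
        if name not in self._jitted:  # compile lazily on first use
            self._jitted[name] = self.compile_fn(self.ir_by_name[name])
        return self._jitted[name](*args)

ir_map = LowerToIRStep().run({"double": "fun x -> x + x"})
ev = Evaluator(ir_map, compile_fn=lambda ir_text: (lambda x: x + x))
print(ev.call("double", 21))  # 42
```

Splitting it this way keeps eval free of code-gen logic and lets the IR step be tested (and cached) independently of execution.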

Collaborator Author

Also, initially I had the idea of having a specific function to run the kernels. But since I will be generating the LLVM IR, wouldn't it be more seamless for the user if the only thing they had to do in their code was add the gpu decorator? A separate config data structure just for that doesn't seem justified, especially since I will need to generate the kernel anyway. The alternative would be to evaluate the expression, extract the params, and then try to pass them to the kernel, but that seems fragile when the same information could be supplied as parameters to the decorator.
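
The decorator-parameters approach could look like this. Everything below is a hypothetical sketch (not aeon's actual API): the launch configuration rides on the decorator, so user code stays a plain function call.

```python
import functools

def gpu(blocks=1, threads=32):
    """Hypothetical @gpu decorator: records the launch configuration so a
    backend could generate and launch the kernel transparently on call."""
    def wrap(fn):
        @functools.wraps(fn)
        def runner(*args, **kwargs):
            # A real backend would JIT a kernel here and launch it with
            # runner.launch_config; this sketch falls back to the Python body.
            return fn(*args, **kwargs)
        runner.launch_config = {"blocks": blocks, "threads": threads}
        return runner
    return wrap

@gpu(blocks=4, threads=128)
def saxpy(a, xs, ys):
    return [a * x + y for x, y in zip(xs, ys)]

print(saxpy.launch_config)                  # {'blocks': 4, 'threads': 128}
print(saxpy(2.0, [1.0, 2.0], [3.0, 4.0]))   # [5.0, 8.0]
```

This keeps the call site identical to undecorated code, which is the seamlessness the comment argues for.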

@SousaTrashBin SousaTrashBin requested a review from alcides March 13, 2026 02:10

gpu commented Mar 13, 2026

Although I was tagged rather accidentally here (that happens often, greetings to @cpu ), it somehow reminded me of an experiment that I made many years ago: http://jocl.org/GroovyGPU/ . Good luck with whatever you're building here 👍

@alcides alcides marked this pull request as draft March 17, 2026 15:00
@SousaTrashBin SousaTrashBin changed the title from "Add Gpu Support" to "Add Backend LLVM Code Gen Support" on Mar 17, 2026
Owner

alcides commented Mar 18, 2026

> Although I was tagged rather accidentally here (that happens often, greetings to @cpu ), it somehow reminded me of an experiment that I made many years ago: http://jocl.org/GroovyGPU/ . Good luck with whatever you're building here 👍

I remember your project! A couple of years before, I did a similar thing for java (https://github.com/AEminium/AeminiumGPU and https://github.com/AEminium/AeminiumGPUCompiler/)! Really cool!

Sorry for bothering you!

