Dot product optimization pass #20

@jklontz

Description

At the heart of many computer vision algorithms (subspace learning, deep learning, wavelets) is a dot product of an incoming image against a filter constructed offline. The idea is to introduce a suite of LLVM optimization passes that exploit the fact that the filter is known at compile time. Specifically:

  1. Completely unroll the dot product loop based on the known filter size.
  2. Substitute memory access instructions with the known constant values for the filter.
  3. Eliminate instructions where the filter value is 0 (or perhaps near-zero).

Together these passes convert a generic dense dot product into hard-coded sparse dot product code.

As a stretch goal:

  1. Explore other optimizations that become possible when the dot product may be approximated within a pre-specified margin of error.

This is a long-term research idea and could make a good paper on its own. It is also an example of an interesting optimization that becomes possible with a computer vision DSL.
