docs: initial draft #4

Merged
merged 6 commits on Sep 13, 2024
Changes from 3 commits
21 changes: 21 additions & 0 deletions .github/workflows/links.yml
@@ -0,0 +1,21 @@
name: "Check Links"

on:
push:
branches:
- master
  pull_request:
  workflow_dispatch:

jobs:
  check-links:
    name: Detect Broken Links
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run linkspector
        uses: umbrelladocs/action-linkspector@v1
        with:
          reporter: github-pr-review
          fail_on_error: true
79 changes: 78 additions & 1 deletion README.md
@@ -1,2 +1,79 @@
# Awesome Self Supervised Learning [![awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
# Awesome Self Supervised Learning [![awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) [![Discord](https://img.shields.io/discord/752876370337726585?logo=discord&logoColor=white&label=discord&color=7289da)](https://discord.gg/xvNJW94)

## 2024

| Title | Relevant Links |
|:-----:|:--------------:|
| [Scalable Pre-training of Large Autoregressive Image Models](https://arxiv.org/abs/2401.08541) | [![arXiv](https://img.shields.io/badge/arXiv-2401.08541-b31b1b.svg)](https://arxiv.org/abs/2401.08541) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/aim.ipynb) |
| [SAM 2: Segment Anything in Images and Videos](https://arxiv.org/abs/2408.00714) | [![arXiv](https://img.shields.io/badge/arXiv-2408.00714-b31b1b.svg)](https://arxiv.org/abs/2408.00714) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1kWvZclajy7z3ize2KNCLzCfvZN2pDien/view?usp=sharing) |
| [Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach](https://arxiv.org/abs/2405.15613) | [![arXiv](https://img.shields.io/badge/arXiv-2405.15613-b31b1b.svg)](https://arxiv.org/abs/2405.15613) |
| [GLID: Pre-training a Generalist Encoder-Decoder Vision Model](https://arxiv.org/abs/2404.07603) | [![arXiv](https://img.shields.io/badge/arXiv-2404.07603-b31b1b.svg)](https://arxiv.org/abs/2404.07603) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1CEaZ00z-0hqGKp5cTN8fxP6tsHiHkFye/view?usp=sharing) |
| [Rethinking Patch Dependence for Masked Autoencoders](https://arxiv.org/abs/2401.14391) | [![arXiv](https://img.shields.io/badge/arXiv-2401.14391-b31b1b.svg)](https://arxiv.org/abs/2401.14391) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1LtIPoes3y1ZOHD-UBeKgj9AYBoQ-nO5A/view?usp=sharing) |

## 2023

| Title | Relevant Links |
|:-----:|:--------------:|
| [A Cookbook of Self-Supervised Learning](https://arxiv.org/abs/2304.12210) | [![arXiv](https://img.shields.io/badge/arXiv-2304.12210-b31b1b.svg)](https://arxiv.org/abs/2304.12210) |
| [Masked Autoencoders Enable Efficient Knowledge Distillers](https://arxiv.org/abs/2208.12256) | [![arXiv](https://img.shields.io/badge/arXiv-2208.12256-b31b1b.svg)](https://arxiv.org/abs/2208.12256) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1bzuOab5fvKK7jpxv5bMoGk1gW446SCUL/view?usp=sharing) |
| [Understanding and Generalizing Contrastive Learning from the Inverse Optimal Transport Perspective](https://openreview.net/forum?id=DBlWCsOy94) | [![ICML](https://img.shields.io/badge/ICML-2023-b31b1b.svg)](https://openreview.net/forum?id=DBlWCsOy94) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1hBEy-yh_KtkqY3rjeato-Cuo6ITzhowr/view?usp=sharing) |
| [CycleCL: Self-supervised Learning for Periodic Videos](https://arxiv.org/abs/2311.03402) | [![arXiv](https://img.shields.io/badge/arXiv-2311.03402-b31b1b.svg)](https://arxiv.org/abs/2311.03402) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1BDC891HX_JxF84UK_x8RKgHZockJqQFU/view?usp=sharing) |
| [Temperature Schedules for Self-Supervised Contrastive Methods on Long-Tail Data](https://arxiv.org/abs/2303.13664) | [![arXiv](https://img.shields.io/badge/arXiv-2303.13664-b31b1b.svg)](https://arxiv.org/abs/2303.13664) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1RabJuwtOevH9hg9wuFTN4z8y4gjQxCT_/view?usp=sharing) |
| [Reverse Engineering Self-Supervised Learning](https://arxiv.org/abs/2305.15614) | [![arXiv](https://img.shields.io/badge/arXiv-2305.15614-b31b1b.svg)](https://arxiv.org/abs/2305.15614) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1KsqV9_HE0y0EwlNivUdZPKqqCkdM-4HB/view?usp=sharing) |
| [Improved baselines for vision-language pre-training](https://arxiv.org/abs/2305.08675) | [![arXiv](https://img.shields.io/badge/arXiv-2305.08675-b31b1b.svg)](https://arxiv.org/abs/2305.08675) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1CNLvxt1jri7chCGy2ZqXBzDwPko0s6QP/view?usp=sharing) |
| [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) | [![arXiv](https://img.shields.io/badge/arXiv-2304.07193-b31b1b.svg)](https://arxiv.org/abs/2304.07193) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/11szszgtsYESO3QF8jkFsLFTVtN797uH2/view?usp=sharing) |
| [Segment Anything](https://arxiv.org/abs/2304.02643) | [![arXiv](https://img.shields.io/badge/arXiv-2304.02643-b31b1b.svg)](https://arxiv.org/abs/2304.02643) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/18yPuL8J6boi5pB1NRO6VAUbYEwmI3tFo/view?usp=sharing) |
| [Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture](https://arxiv.org/abs/2301.08243) | [![arXiv](https://img.shields.io/badge/arXiv-2301.08243-b31b1b.svg)](https://arxiv.org/abs/2301.08243) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1l5nHxqqbv7o3ESw3DLBqgJyXILJ0FgH6/view?usp=sharing) |

## 2022

| Title | Relevant Links |
|:-----:|:--------------:|
| [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) | [![arXiv](https://img.shields.io/badge/arXiv-2204.07141-b31b1b.svg)](https://arxiv.org/abs/2204.07141) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/15WGpYpxy4_1a927RWrmlkeJohZDznN8e/view?usp=sharing) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/msn.ipynb) |
| [The Hidden Uniform Cluster Prior in Self-Supervised Learning](https://arxiv.org/abs/2210.07277) | [![arXiv](https://img.shields.io/badge/arXiv-2210.07277-b31b1b.svg)](https://arxiv.org/abs/2210.07277) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/pmsn.ipynb) |
| [Unsupervised Visual Representation Learning by Synchronous Momentum Grouping](https://arxiv.org/abs/2207.06167) | [![arXiv](https://img.shields.io/badge/arXiv-2207.06167-b31b1b.svg)](https://arxiv.org/abs/2207.06167) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/smog.ipynb) |
| [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning](https://arxiv.org/abs/2206.10698) | [![arXiv](https://img.shields.io/badge/arXiv-2206.10698-b31b1b.svg)](https://arxiv.org/abs/2206.10698) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/tico.ipynb) |
| [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning](https://arxiv.org/abs/2105.04906) | [![arXiv](https://img.shields.io/badge/arXiv-2105.04906-b31b1b.svg)](https://arxiv.org/abs/2105.04906) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/vicreg.ipynb) |
| [VICRegL: Self-Supervised Learning of Local Visual Features](https://arxiv.org/abs/2210.01571) | [![arXiv](https://img.shields.io/badge/arXiv-2210.01571-b31b1b.svg)](https://arxiv.org/abs/2210.01571) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/vicregl.ipynb) |
| [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) | [![arXiv](https://img.shields.io/badge/arXiv-2203.12602-b31b1b.svg)](https://arxiv.org/abs/2203.12602) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1F0oyiyyxCKzWS9Gv8TssHxaCMFnAoxfb/view?usp=sharing) |
| [Improving Visual Representation Learning through Perceptual Understanding](https://arxiv.org/abs/2212.14504) | [![arXiv](https://img.shields.io/badge/arXiv-2212.14504-b31b1b.svg)](https://arxiv.org/abs/2212.14504) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1n4Y0iiM368RaPxPg6qvsfACguaolFnhf/view?usp=sharing) |
| [RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank](https://arxiv.org/abs/2210.02885) | [![arXiv](https://img.shields.io/badge/arXiv-2210.02885-b31b1b.svg)](https://arxiv.org/abs/2210.02885) [![Google Drive](https://img.shields.io/badge/Lightly_Reading_Group-4285F4?logo=googledrive&logoColor=white)](https://drive.google.com/file/d/1cEP1_G2wMM3-AMMrdntGN6Fq1E5qwPi1/view?usp=sharing) |

## 2021

| Title | Relevant Links |
|:-----:|:--------------:|
| [Barlow Twins: Self-Supervised Learning via Redundancy Reduction](https://arxiv.org/abs/2103.03230) | [![arXiv](https://img.shields.io/badge/arXiv-2103.03230-b31b1b.svg)](https://arxiv.org/abs/2103.03230) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/barlowtwins.ipynb) |
| [Decoupled Contrastive Learning](https://arxiv.org/abs/2110.06848) | [![arXiv](https://img.shields.io/badge/arXiv-2110.06848-b31b1b.svg)](https://arxiv.org/abs/2110.06848) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/dcl.ipynb) |
| [Dense Contrastive Learning for Self-Supervised Visual Pre-Training](https://arxiv.org/abs/2011.09157) | [![arXiv](https://img.shields.io/badge/arXiv-2011.09157-b31b1b.svg)](https://arxiv.org/abs/2011.09157) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/densecl.ipynb) |
| [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) | [![arXiv](https://img.shields.io/badge/arXiv-2104.14294-b31b1b.svg)](https://arxiv.org/abs/2104.14294) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/dino.ipynb) |
| [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) | [![arXiv](https://img.shields.io/badge/arXiv-2111.06377-b31b1b.svg)](https://arxiv.org/abs/2111.06377) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/mae.ipynb) |
| [With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations](https://arxiv.org/abs/2104.14548) | [![arXiv](https://img.shields.io/badge/arXiv-2104.14548-b31b1b.svg)](https://arxiv.org/abs/2104.14548) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/nnclr.ipynb) |
| [SimMIM: A Simple Framework for Masked Image Modeling](https://arxiv.org/abs/2111.09886) | [![arXiv](https://img.shields.io/badge/arXiv-2111.09886-b31b1b.svg)](https://arxiv.org/abs/2111.09886) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/simmim.ipynb) |
| [Exploring Simple Siamese Representation Learning](https://arxiv.org/abs/2011.10566) | [![arXiv](https://img.shields.io/badge/arXiv-2011.10566-b31b1b.svg)](https://arxiv.org/abs/2011.10566) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/simsiam.ipynb) |

## 2020

| Title | Relevant Links |
|:-----:|:--------------:|
| [Bootstrap your own latent: A new approach to self-supervised Learning](https://arxiv.org/abs/2006.07733) | [![arXiv](https://img.shields.io/badge/arXiv-2006.07733-b31b1b.svg)](https://arxiv.org/abs/2006.07733) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/byol.ipynb) |
| [A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/abs/2002.05709) | [![arXiv](https://img.shields.io/badge/arXiv-2002.05709-b31b1b.svg)](https://arxiv.org/abs/2002.05709) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/simclr.ipynb) |
| [Unsupervised Learning of Visual Features by Contrasting Cluster Assignments](https://arxiv.org/abs/2006.09882) | [![arXiv](https://img.shields.io/badge/arXiv-2006.09882-b31b1b.svg)](https://arxiv.org/abs/2006.09882) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/swav.ipynb) |

## 2019

| Title | Relevant Links |
|:-----:|:--------------:|
| [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/abs/1911.05722) | [![arXiv](https://img.shields.io/badge/arXiv-1911.05722-b31b1b.svg)](https://arxiv.org/abs/1911.05722) [![Open In Colab](https://img.shields.io/badge/Colab-PyTorch-blue?logo=googlecolab)](https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/moco.ipynb) |

## 2018

| Title | Relevant Links |
|:-----:|:--------------:|
| [Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination](https://arxiv.org/abs/1805.01978) | [![arXiv](https://img.shields.io/badge/arXiv-1805.01978-b31b1b.svg)](https://arxiv.org/abs/1805.01978) |

## 2016

| Title | Relevant Links |
|:-----:|:--------------:|
| [Context Encoders: Feature Learning by Inpainting](https://arxiv.org/abs/1604.07379) | [![arXiv](https://img.shields.io/badge/arXiv-1604.07379-b31b1b.svg)](https://arxiv.org/abs/1604.07379) |