---
layout: default
---
This is a hands-on tutorial intended to present state-of-the-art deep learning models and equip vision researchers with the tools and know-how to incorporate deep learning into their work. Deep learning models and deep features have recently achieved strong results in classification and recognition, detection, and segmentation, but a common framework and shared models are needed to advance further work and reduce the barrier to entry.
To this end we present Caffe (Convolutional Architecture for Fast Feature Embedding), a framework that offers an open-source library, public reference models, and worked examples for deep learning in vision. Demos will be given live and the audience will be able to follow along with the examples. To do so, follow the tutorial installation instructions.
Check out the tutorial guide on the tutorial edition of the Caffe site for documentation, examples, and more.
Slides
- Caffe Tutorial, pt. 1: the Caffe tour
- Caffe Tutorial, pt. 2: deep learning for detection, fine-tuning, training analysis
This half-day morning tutorial is held Sunday, September 7 from 9:00 to 12:30. Part one (9:00 - 10:30) will give a tour of Caffe and examples. Part two (11:00 - 12:30) will continue the tour and cover fine-tuning, model interpretation and analysis, and R-CNN. There will be a break for open discussion and coffee from 10:30 to 11:00.
Caffe Tour 9:00 - 10:30
- Introduction: deep learning and the need for frameworks
- Philosophy: expressive, fast, modular, and open
- Model Anatomy: nets, layers, blobs
- Forward / Backward computation
- Data
- Loss
- Solver / model optimization
- Interfaces for command line, Python, and MATLAB
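To make the net / layer / blob anatomy above concrete, here is a minimal pycaffe sketch of loading a trained model and running forward and backward passes. The file names are placeholders for your own model, and the exact Python calls may differ slightly across Caffe versions, so treat this as a sketch rather than a reference.

```python
import numpy as np
import caffe

# Load a trained net from its definition (prototxt) and weights (caffemodel).
# File names here are placeholders for your own model.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Blobs hold the data flowing through the net; params hold the learned weights.
for name, blob in net.blobs.items():
    print(name, blob.data.shape)       # activations, one blob per layer output
for name, params in net.params.items():
    print(name, params[0].data.shape)  # weights (params[1] holds the biases)

# Forward: fill the input blob (here assumed to be named 'data'), then
# propagate through the layers.
net.blobs['data'].data[...] = np.random.randn(*net.blobs['data'].data.shape)
out = net.forward()

# Backward: set a gradient at the top blob and propagate it back down.
net.blobs[net.outputs[0]].diff[...] = 1.0
net.backward()
```

The same nets, layers, and blobs are exposed through the command line and MATLAB interfaces; the Python view simply makes them easy to inspect interactively.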
Caffeine 10:30 - 11:00
- coffee!
- examples / demos
- discussion
Fine-tuning, Model Analysis, and More 11:00 - 12:30
- Fine-tuning
- Model interpretation and analysis
- R-CNN
- Tips and tricks
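As a taste of the fine-tuning topic above, the sketch below adapts a pretrained model to a new task by copying weights into a new net by layer name and then resuming SGD, with a quick look at the learned parameters for model interpretation. The solver path and layer name are placeholder assumptions, and the exact calls may vary between Caffe versions; the talk walks through the full recipe.

```python
import caffe

# Solve on the GPU if available; caffe.set_mode_cpu() also works.
caffe.set_mode_gpu()

# The solver prototxt (placeholder path) points at the new task's net definition.
# Layers whose names match the pretrained net keep their weights; renamed layers
# (e.g. a new final classifier) are initialized from scratch.
solver = caffe.SGDSolver('finetune_solver.prototxt')
solver.net.copy_from('bvlc_reference_caffenet.caffemodel')  # pretrained weights

# Take a few SGD steps (or solver.solve() to run to completion).
solver.step(100)

# Inspect the adapted parameters, e.g. the first conv layer's filters
# (layer name 'conv1' assumed here).
w = solver.net.params['conv1'][0].data
print(w.shape, w.mean(), w.std())
```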
Beginners: you will be equipped with the tools and know-how to begin or improve your deep learning toolkit through talks, worked examples, and demos.
Intermediate: follow along for a walkthrough of model definition and development, framework extension, and practical tips and tricks.
Advanced: join for discussion of advanced modeling and optimization as well as news of the latest developments brewing in Caffe.
Everything will be bundled in the next Caffe release, and is published now on this site and in the dev branch of BVLC/caffe.
- Evan Shelhamer @shelhamer. PhD student, UC Berkeley CS / ICSI.
- Jeff Donahue @jeffdonahue. PhD student, UC Berkeley CS / ICSI.
- Yangqing Jia @Yangqing. Research Scientist, Google / UC Berkeley.
- Ross Girshick @rbgirshick. Research Scientist, UC Berkeley.
The tutorial organizers would like to thank
- the Caffe team, the Berkeley Vision and Learning Center, and BVLC PI Trevor Darrell
- our open-source contributors and the Caffe community as a whole
- NVIDIA and the NVIDIA Academic program for collaboration and GPU donation
for all the opportunities in our first year brewing deep nets in Caffe.