Reading Group in Computer and Robot Vision

  • The purpose of this reading group is to provide an overview of the rapidly evolving computer vision literature. We meet once a week to discuss an article that all participants should read before the seminar.
  • All participants are welcome to suggest and present articles of their choosing. The theme should be computer and robot vision, with emphasis on new, high-impact conference papers (e.g. from ICCV, CVPR, RSS, or ECCV). To make the best use of time, consider choosing an article that relates to your work and that you would presumably read anyway. A well-written article is also a plus.
  • You have the option to attend the seminars as part of a PhD course. You will get 1 hp each time you present a paper and participate in another three seminars. If you want to take the PhD course option, let me know in advance by sending me an email with your personal number, so that I can register your attendance.
    /Per-Erik

  • Meeting room: Visionen, Stora Konf.rummet, Campus Valla, Building B.

  • Time: Thursdays at 14.00-15.00 (note: no academic quarter!).

  • E-mail list: Upcoming meetings and articles are announced on the mailing list vision-seminars.

Upcoming articles

  • Oct 6: Johan presents: P. Hruby et al. Learning to Solve Hard Minimal Problems, CVPR'22 [PDF]
  • Oct 27: First available slot.

Article suggestions

  • Have a look at e.g. the following proceedings: ICCV'21, CVPR'21, ECCV'20, NeurIPS'20, SIGGRAPH'21, RSS'21, ACCV'21. Some old, unused suggestions are listed below.
  • Ozan Sener and Vladlen Koltun, "Multi-task Learning as Multi-Objective Optimization", NeurIPS'18 [PDF] SD201106
  • Qianqian Wang et al., "Learning Feature Descriptors using Camera Pose Supervision", ECCV'20 [PDF] [GIT] SD200812
  • Ruiqi Gao et al., "Flow Contrastive Estimation of Energy-Based Models", CVPR'20 [PDF] SD200630
  • A. Ilyas et al., "Adversarial Examples Are Not Bugs, They Are Features", ArXiv 2019 [PDF] [blog post] SD190515
  • M. Nakada et al., Deep Learning of Biomimetic Sensorimotor Control for Biomechanical Human Animation, TOG 2018 [PDF] SD201218
  • T. Takikawa et al., Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Surfaces, ArXiv'21 [PDF] [GIT] SD210130
  • A. Dosovitskiy et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, [ICLR21] SD210224
  • M. Caron et al., Emerging Properties in Self-Supervised Vision Transformers, ArXiv'21 [PDF] [blog] [GIT] SD210506

Paper log spring 2022

  • April 7: Joakim presents: Olivier J. Hénaff et al. Object discovery and representation networks, ArXiv'22 [PDF]
  • April 28: Johan presents: A. Ramesh et al., DALL-E 2: Hierarchical Text-Conditional Image Generation with CLIP Latents, ArXiv 2022 [ArXiv][webpage]
  • May 12: Karl presents: Data-free Knowledge Distillation for Object Detection, WACV'21 [PDF] [webpage]
  • May 19: Joakim presents: Z. Li et al., BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers, ArXiv'22 [PDF] [GIT]

Paper log autumn 2022

  • Sept 8: Joakim presents: A. W. Harley et al. A Simple Baseline for BEV Perception Without LiDAR, ArXiv 2022 [PDF] [project page]

Old paper logs


Last updated: 2022-09-30