©2019 by International Workshop and Challenge on Real-World Recognition from Low-Quality Images and Videos (RLQ2019).

The International Workshop and Challenge on

Real-World Recognition

from Low-Quality Images and Videos

(RLQ)

 

in conjunction with ICCV 2019

- to understand the robustness of algorithms in non-ideal visual environments


 

Program

Sunday, 27 Oct, Room 317BC, COEX Convention Center

[Morning Session]

  • 9:00        Opening Remarks

  • 9:15        Invited Talk 1: Deep Pedestrian Detection across Occlusion with Geometric Context

                     Gang Hua (VP & Chief Scientist, Wormpex AI Research)

  • 10:00     Coffee Break and Poster Session I

  • 11:00     Invited Talk 2: Robust Face Recognition and Verification from Low-Quality Images and Videos

                     Rama Chellappa (Professor, University of Maryland)

  • 11:45     Invited Talk 3: Unconstrained Computer Vision

                     Manmohan Chandraker (Professor, UC San Diego)

  • 12:30     Lunch Break

[Afternoon Session]

  • 13:30    Invited Talk 4: Tackling Person Identification at a Distance: Pose, Resolution and Gait

                    Xiaoming Liu (Professor, Michigan State University)

  • 14:15     Oral 1: Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?

                  Alfred Laugros*; Alice Caplier; Matthieu Ospici (Atos)

  • 14:25    Oral 2: SNIDER: Single Noisy Image Denoising and Rectification for Improving License Plate Recognition

                  Younkwan Lee*; Juhyun Lee; Hoyeon Ahn; Moongu Jeon (GIST)

  • 14:35     Oral 3: Non-discriminative data or weak model? On the relative importance of data and model resolution

                  Mark Sandler*; Jonathan Baccash; Andrey Zhmoginov; Andrew Howard (Google)

  • 14:45     QMUL Challenge Introduction and Results Announcement

  • 15:00     Challenge Winner Talk: Towards Real-world Low-quality Face Recognition

                     Xiaobo Wang, Shuo Wang (JD AI Research)

  • 15:10     Oral 4: Generatively Inferential Co-Training for Unsupervised Domain Adaptation

                  Can Qin*; Lichen Wang; Yulun Zhang; Yun Fu (Northeastern University)

  • 15:20     Oral 5: Unsupervised Outlier Detection in Appearance-Based Gaze Estimation

                  Zhaokang Chen*; Didan Deng; Jimin Pi; Bertram Shi (HKUST)

  • 15:25     Workshop Award Ceremony: Best Paper, Outstanding Paper, Challenge Winner

  • 15:30     Coffee Break and Poster Session II

  • 16:25     ICCV Oral Invited Talk 1: Seeing Motion in the Dark

                  Chen Chen (UIUC); Qifeng Chen* (HKUST); Minh N. Do (UIUC); Vladlen Koltun (Intel)

  • 16:35     ICCV Oral Invited Talk 2: Two-Stream Action Recognition-Oriented Video Super-Resolution

                  Haochen Zhang*; Dong Liu; Zhiwei Xiong (USTC)

  • 16:45     Invited Talk 5: Legal and Ethical Considerations of Real-World Face Recognition

                     Matthew Turk (President, TTIC)

  • 17:30     Panel Discussion: Robustness, Legal, Ethics and Future Trend of Low-Quality Vision

                     Panelists: Matthew Turk, Rama Chellappa, Gang Hua, Bertram Shi, Xiaoming Liu, Dong Liu

                     Moderator: Zhangyang(Atlas) Wang


Accepted Papers

Accepted Full Paper (Full Text available)

Morning Session Poster I:  (Until 1pm)

Poster ID - Paper Name - Author List

  1. (#96) Extreme Low Resolution Action Recognition with Spatial-Temporal Multi-Head Self-Attention and Knowledge Distillation (Purwanto, Didik Mr*; Pramono, Rizard Renanda Adhi; Chen, Yie-Tarng; Fang, Wen-Hsien)

  2. (#97) Online Multi-task Clustering for Human Motion Segmentation (Sun, Gan*; Cong, Yang; Wang, Lichen; Ding, Zhengming; Fu, Yun)

  3. (#98) Feature Aggregation Network for Video Face Recognition (Liu, Zhaoxiang*; Hu, Huan; Bai, Jinqiang; Li, Shaohua; Lian, Shiguo)

  4. (#99) Recognizing Compressed Videos: Challenges and Promises (Pourreza, Reza*; Ghodrati, Amir; Habibian, Amirhossein)

  5. (#100) Evidence Based Feature Selection and Collaborative Representation Towards Learning Based PSF Estimation for Motion Deblurring (Dhanakshirur, Rohan Raju*; Tabib, Ramesh; Patil, Ujwala; Mudenagudi, Uma)

  6. (#101) Low Quality Video Face Recognition: Multi-mode Aggregation Recurrent Network (MARN) (Gong, Sixue*; Shi, Yichun; Jain, Anil)

  7. (#102) Non-discriminative data or weak model? On the relative importance of data and model resolution (Sandler, Mark*; Baccash, Jonathan; Zhmoginov, Andrey; Howard, Andrew)

  8. (#103) GSR-MAR: Global Super-Resolution for Person Multi-Attribute Recognition (Siadari, Thomhert S*; Han, Mikyong; Yoon, Hyunjin)

  9. (#104) Image Deconvolution with Deep Image and Kernel Priors (Wang, Zhunxuan*; Wang, Zipei; Li, Qiqi; Bilen, Hakan)

  10. (#105) Indoor Depth Completion with Boundary Consistency and Self-Attention (Huang, Yu-Kai*; Wu, Tsung-Han; Liu, Yueh-Cheng; Hsu, Winston H.)

Afternoon Session Poster II:  (After 1pm)

Poster ID - Paper Name - Author List

  1. (#96) QMUL Surveillance Face Recognition Challenge (Zhiyi Cheng, Qi Dong, Xiatian Zhu, Shaogang Gong)

  2. (#97) Generatively Inferential Co-Training for Unsupervised Domain Adaptation (Qin, Can*; Wang, Lichen; Zhang, Yulun; Fu, Yun)

  3. (#98) Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes? (Laugros, Alfred*; Caplier, Alice; Ospici, Matthieu)

  4. (#99) Unsupervised Outlier Detection in Appearance-Based Gaze Estimation (Chen, Zhaokang*; Deng, Didan; Pi, Jimin; Shi, Bertram)

  5. (#100) Unsupervised Deep Feature Transfer for Low Resolution Image Classification (Wu, Yuanwei*; Zhang, Ziming; Wang, Guanghui)

  6. (#101) SNIDER: Single Noisy Image Denoising and Rectification for Improving License Plate Recognition (Lee, Younkwan*; Lee, Juhyun; Ahn, Hoyeon; Jeon, Moongu)

  7. (#102) Intra-Camera Supervised Person Re-Identification: A New Benchmark (Zhu, Xiangping*; Zhu, Xiatian; Li, Minxian; Murino, Vittorio; Gong, Shaogang)

  8. (#103) State-of-the-Art in Action: Unconstrained Text Detection (Nguyen, Diep T.N.)

  9. (#104) Real-time Age-Invariant Face Recognition in Videos using the ScatterNet Inception Hybrid Network (SIHN) (Bodhe, Saurabh*; Singh, Amarjot; Kapse, Prathamesh P)

  10. (#105) Recognizing Tiny Faces (Mynepalli, Siva Chaitanya*; Hu, Peiyun; Ramanan, Deva)

 

Accepted Abstract

  1. Floating Ground-truth Labels (Kim, Jiman*; Park, Chanjong) [pdf]

  2. Image Super Resolution Techniques Applied on Satellite Imagery (Phetmunee, Chitipat*; Doan, Hien Thi; Vu, Dieu Hoang; Ahn, Donghyun; Cha, Hyunji; Han, Sungwon; Cha, Meeyoung) [pdf]

  3. Is There Tradeoff between Spatial and Temporal in Video Super-Resolution? (Zhang, Haochen; Liu, Dong*; Xiong, Zhiwei) [pdf]

  4. Recognizing Emotions from Out-of-domain Facial Expressions Produced by Non-Actors (Kim, Taehyeong*; Yang, Seung Hee; Ko, Hyunwoong; Cho, Sungjae; Lee, Jun-Young; Zhang, Byoung-Tak) [pdf]

  5. Underwater Image Enhancement via Group-Wise Deep Whitening and Colouring Transforms (Jamadandi, Adarsh*; Desai, Chaitra D; Tabib, Ramesh; Mudenagudi, Uma) [pdf]

  6. Application of Face Recognition Technology to a History Project (Roman-Basora, Manuel*) [pdf]

  7. Wavelet Pooling for Convolutional Neural Networks using Unitary Gates (Jamadandi, Adarsh*; Mudenagudi, Uma) [pdf]

  8. Framework for Underwater Dataset Generation and Classification Towards Modeling Restoration (Desai, Chaitra D*; Tabib, Ramesh; Patil, Anisha J; Karanth, Samanvitha U; Jamadandi, Adarsh; Patil, Ujwala; Mudenagudi, Uma) [pdf]

  9. Adaptive Color Correction for Underwater Image Enhancement (Hegde, Deepti B*; Desai, Chaitra D; Tabib, Ramesh; Mudenagudi, Uma; Bora, Prabin) [pdf]

 

 

Invited Speakers


Rama Chellappa

Professor, University of Maryland

Robust Face Recognition and Verification from Low-Quality Images and Videos

Prof. Rama Chellappa is a Minta Martin Professor in the A. James Clark School of Engineering at the University of Maryland. Before that, he served as Chair of the Electrical and Computer Engineering Department from 2011 to 2018. His current research interests are face and gait analysis, secure biometrics, 3D modeling from video, image/video exploitation from stationary and moving platforms, compressive sensing, hyperspectral processing, and commercial applications of image processing and understanding.

Matthew Turk

President, TTIC

Legal and Ethical Considerations of Real-World Face Recognition

Prof. Matthew Turk is the President of TTIC. Prior to joining TTIC in 2019, he served as a department chair at the University of California, Santa Barbara, from 2017 to 2019. His primary research interests are in computer vision and machine learning, augmented and mixed reality, and human-computer interaction. He is widely known for Eigenfaces, one of the most influential early face recognition methods.

Gang Hua

VP & Chief Scientist, Wormpex AI Research

Deep Pedestrian Detection across Occlusion with Geometric Context

Dr. Gang Hua is the Vice President and Chief Scientist of Wormpex AI Research. Before that, he served in various roles at Microsoft, including Science/Technical Adviser to the CVP of the Computer Vision Group, Director of the Computer Vision Science Team in Redmond and Taipei ATL, and Senior Principal Researcher/Research Manager at Microsoft Research. His research focuses on computer vision, pattern recognition, machine learning, and robotics, working toward general artificial intelligence, with primary applications in cloud and edge intelligence and a current focus on new retail intelligence.


Xiaoming Liu

Associate Professor, Michigan State University

Tackling Person Identification at a Distance: Pose, Resolution and Gait

Prof. Xiaoming Liu is an associate professor at Michigan State University, where he leads the Computer Vision Lab. Prior to joining MSU, he was a research scientist at the Computer Vision Laboratory of GE Global Research. His research interests include computer vision, pattern recognition, machine learning, biometrics, and human-computer interfaces.


Manmohan Chandraker

Assistant Professor, UC San Diego

Unconstrained Computer Vision

Prof. Manmohan Chandraker is an assistant professor in the CSE department of the University of California, San Diego. He was previously a postdoctoral scholar at UC Berkeley and leads computer vision research at NEC Labs. His research interests are in computer vision, machine learning, and graphics-based vision, with applications to autonomous driving and human-computer interfaces.

 

Organizing Committee

 

Challenge Committee

 

Program Committee

Academic Committee

Dr. C.-C. Jay Kuo, Professor, IEEE Fellow, University of Southern California
Dr. Jiebo Luo, Professor, IEEE Fellow, University of Rochester
Dr. Zhouchen Lin, Professor, IEEE Fellow, Peking University
Dr. Dacheng Tao, Professor, IEEE Fellow, University of Sydney, Australia
Dr. Chia-Wen Lin, Professor, IEEE Fellow, National Tsing Hua University
Dr. Weiyao Lin, Professor, Shanghai Jiao Tong University
Dr. Ruiqin Xiong, Professor, Peking University
Dr. Shuicheng Yan, Associate Professor, IEEE Fellow, National University of Singapore
Dr. Xiaoning Qian, Associate Professor, Texas A&M University
Dr. Dong Liu, Associate Professor, University of Science and Technology of China
Dr. Chen Change Loy, Associate Professor, Nanyang Technological University
Dr. Xiaojie Guo, Associate Professor, Tianjin University
Dr. Jiashi Feng, Assistant Professor, National University of Singapore
Dr. Xinchao Wang, Assistant Professor, Stevens Institute of Technology
Dr. Bihan Wen, Assistant Professor, Nanyang Technological University
Dr. Hien Van Nguyen, Assistant Professor, University of Houston
Dr. Peixi Peng, Assistant Professor, Chinese Academy of Sciences
Dr. Tongliang Liu, Assistant Professor, University of Sydney, Australia
Dr. Kede Ma, Postdoctoral Researcher, New York University

Mr. Yang Fu, Ph.D. Student, UIUC

Mr. Kuangxiao Gu, Ph.D. Student, UIUC

Industrial Committee

Dr. Wenjun Zeng, Principal Researcher, IEEE Fellow, Microsoft Research Asia
Dr. Jianchao Yang, Director of AI Lab, ByteDance
Dr. Haichao Zhang, Senior Research Scientist, Baidu Research
Dr. Zhaowen Wang, Research Scientist, Adobe Research
Dr. Yang Yang, Principal Data Scientist, Walmart Tech
Dr. Shaodi You, Senior Research Scientist, Data61, CSIRO, Australia
Dr. Wei Zhang, Senior Research Scientist, JD AI Research
Dr. Yu Li, Senior Engineer, BlackMagic Design

Contact

If you have questions about paper submission, the schedule, or general information, please reach out via the emails below.

COEX Convention Center
513, Yeongdong-daero, Gangnam-gu
Seoul 06164 Republic of Korea

General Inquiry: chairs@forlq.org

Challenge Inquiry: challenge@forlq.org

 

Call for Papers and Participants

What is the current state of the art for recognition and detection algorithms in non-ideal visual environments? We are organizing the RLQ challenge and workshop at ICCV 2019, with an expanded scope for paper solicitation.

While visual recognition research has made tremendous progress in recent years, most models are trained, applied, and evaluated on high-quality (HQ) visual data, such as the ImageNet benchmark. However, in many emerging applications such as robotics and autonomous driving, the performance of visual sensing and analytics is largely jeopardized by low-quality (LQ) visual data acquired from complex unconstrained environments, which suffers from various types of degradation such as low resolution, noise, occlusion, and motion blur. While mild degradations may be compensated for by sophisticated visual recognition models, their impact becomes much more pronounced once the degradation level passes some empirical threshold. Other factors, such as contrast, brightness, sharpness, and defocus, also have negative effects on visual recognition.

We organize this one-day workshop to provide an integrated forum for researchers to review recent progress on robust recognition models for LQ visual data, as well as novel image restoration algorithms. We embrace the most advanced deep learning systems while remaining open to classical physically grounded models and feature engineering, as well as any well-motivated combination of the two streams.

RLQ 2019 consists of a challenge, keynote speeches, paper presentations, poster sessions, a special session on the privacy and ethics of visual recognition, and a panel discussion with the invited speakers. Specifically:

Challenge Track

This challenge is led by QMUL. In addition to submitting results to the online evaluation system, participants are required to submit code and a fact sheet for evaluation (mandatory); they are also encouraged to submit a workshop paper (optional).

  • Validation Phase:  25 June to 30 Aug

  • Testing Phase:  8 Aug to 15 Sep

  • Final Ranks Released:  15 Sep

  • (Challenge-only) Paper Submission Deadline:  15 Aug

  • Notification:  20 Aug

  • Camera-Ready:  28 Aug

  • Material Submission Deadline:  15 Sep

-Challenge Website: [Link]

-Dataset Download: [Link]

-Inquiry, Code and Fact Sheet Submission: challenge@forlq.org

-Review Process: Double-blind (Please keep anonymous)

-Paper Format (2 to 6 pages excluding references) follows the ICCV 2019 Main Conference Guidelines: [link].

-Paper Submission Website: https://cmt3.research.microsoft.com/RLQ2019/Submission/Index

Paper Track

We solicit full papers on topics including but not limited to the following:

  • Robust recognition and detection from low-resolution image/video

  • Robust recognition and detection from video with motion blur

  • Robust recognition and detection from highly noisy image/video

  • Robust recognition and detection from other unconstrained environment conditions

  • Artificial Degradations 

  • Low-resolution image/video enhancement, especially for recognition purposes

  • Image/video denoising and deblurring, especially for recognition purposes

  • Restoration and enhancement of other common degradations, such as low illumination, inclement weather, etc., especially for recognition purposes

  • Novel methods and metrics for image restoration and enhancement algorithms, especially for recognition purposes

  • Surveys of algorithms and applications with LQ inputs in computer vision

  • Psychological and cognitive science research with proper data processing and enhancement

  • Novel calibration and registration methods on gaze or object images, etc., for recognition or detection purposes

  • Novel methods for mining, cleaning, and processing imperfect low-quality data for training a recognition system

  • Other novel applications that robustly handle computer vision tasks with LQ inputs

  • [Special Issues] Legal, privacy and ethics in recognition

The accepted papers will be archived in the IEEE Xplore Digital Library and the CVF Open Access ICCV 2019 Workshop Proceedings, and authors will be invited to present their papers in either oral or poster form.

-Review Process: Double-blind (Please keep anonymous)

-Paper Format (3 to 8 pages excluding references) follows the ICCV 2019 Main Conference Guidelines: [link].

-Paper Submission Website: https://cmt3.research.microsoft.com/RLQ2019/Submission/Index

Abstract Track

We solicit “positioning” write-ups in the form of short non-archival abstracts. They should address important issues that may have a lasting impact on the next five years of research on recognition from low-quality visual data. Examples include but are not limited to:

  • Proposing novel technical solutions: preliminary works and “half-baked” results are welcome

  • Identifying grand challenges that are traditionally overlooked or under-explored

  • Discussing rising applications where recognition from low-quality visual data might have been a critical bottleneck

  • Raising new research questions that may be motivated by emerging applications

  • New datasets, new benchmark efforts, and/or new evaluation strategies

  • Integration of low-quality visual recognition into other research topics

The accepted abstracts will appear on the website and be discussed during the workshop. The workshop organizers will lead a collective positioning paper targeted at a top-tier journal such as TPAMI or IJCV. Authors of selected abstracts will be invited as co-authors of this paper (author order will be alphabetical).

  • First-wave Submission Deadline:  8 Aug (passed)

  • First-wave Notification:  23 Aug

  • Second-wave Start Date:  30 Aug

  • Submission Deadline:  28 Sep

  • Second-wave Notification:  3 Oct

-Review Process: Single-blind (no need to remain anonymous)

-Paper Format (1 to 2 pages excluding references) follows the ICCV 2019 Main Conference Guidelines: [link].

-Paper Submission Website: https://cmt3.research.microsoft.com/RLQ2019/Submission/Index

Special Session

We will hold a discussion on legal, privacy, and ethics issues in recognition, drawing on researchers' previous experience and addressing public concerns. This session will be led by world-renowned researchers.

Panel Discussion

We will host a panel discussion among the invited speakers and organizers.