
LookBench

A Live and Holistic Open Benchmark for Fashion Image Retrieval
 

🔔 News

Last updated: January 2016

🚀 [2016-01-20]: Initial release of LookBench v2601 with 4 diverse subtasks across AI-generated and real-world scenarios, and a new open-source model GR-Lite! 🌟

🔥 [2016-01-20]: Release of GR-Pro (proprietary) and GR-Lite (open-source) models achieving state-of-the-art performance! 🎉

Introduction

LookBench is a live, holistic, and challenging benchmark for fashion image retrieval in real e-commerce settings. Unlike static benchmarks that are vulnerable to data contamination, LookBench features continuously refreshing samples, diverse retrieval intents across multiple difficulty levels, and attribute-supervised evaluation with over 100 visually grounded properties.
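To make the attribute-supervised evaluation concrete, the sketch below shows how a retrieved item could be compared to a query over visually grounded attributes. The record fields, attribute names, and the overlap score are illustrative assumptions only; the actual attribute taxonomy and matching protocol are defined in the LookBench paper.

```python
from dataclasses import dataclass, field

@dataclass
class FashionItem:
    """Hypothetical attribute-annotated catalog item (fields are illustrative)."""
    item_id: str
    attributes: dict = field(default_factory=dict)  # e.g. {"category": "dress", "color": "navy"}

def attribute_overlap(query_attrs: dict, item: FashionItem) -> float:
    """Fraction of the query's visually grounded attributes matched by the item."""
    if not query_attrs:
        return 0.0
    matched = sum(1 for k, v in query_attrs.items() if item.attributes.get(k) == v)
    return matched / len(query_attrs)

# Toy example with hypothetical attribute names and values.
query = {"category": "dress", "color": "navy", "pattern": "floral"}
item = FashionItem("sku_042", {"category": "dress", "color": "navy", "pattern": "striped"})
print(attribute_overlap(query, item))  # ≈ 0.67
```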

The current release (v2601) comprises approximately 2,300 queries across four subtasks; each subtask pairs its queries with a carefully curated retrieval corpus of roughly 60,000 images:

| Dataset | Image Source | Items per Query | Difficulty | # Queries / Corpus Size |
|---|---|---|---|---|
| RealStudioFlat | Real studio flat-lay product photos | Single | Easy | 1,011 / 62,226 |
| AIGen-Studio | AI-generated lifestyle studio images | Single | Medium | 192 / 59,254 |
| RealStreetLook | Real street outfit photos | Multi | Hard | 1,000 / 61,553 |
| AIGen-StreetLook | AI-generated street outfit compositions | Multi | Hard | 160 / 58,846 |

Each evaluation set is assessed using Coarse Recall, Fine Recall, and nDCG at @1, @5, @10, and @20.
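As a rough illustration of these cutoff-based metrics, the sketch below computes recall@k and nDCG@k for a single ranked list. The exact definitions of Coarse versus Fine Recall (e.g., category-level versus exact-item matching) follow the LookBench paper; the function names, item ids, and relevance grades here are illustrative assumptions, not the official evaluation code.

```python
import numpy as np

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant items that appear in the top-k of the ranking."""
    top_k = set(ranked_ids[:k])
    hits = sum(1 for item in relevant_ids if item in top_k)
    return hits / max(len(relevant_ids), 1)

def ndcg_at_k(ranked_ids, relevance, k):
    """nDCG@k for graded relevance; `relevance` maps item id -> gain (e.g. 0/1 or 0/1/2)."""
    gains = [relevance.get(item, 0.0) for item in ranked_ids[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(g / np.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: a ranked corpus slice for one query (ids are hypothetical).
ranked = ["img_7", "img_3", "img_9", "img_1", "img_5"]
relevant = {"img_3", "img_5"}                 # exact-match ground truth
rel_grades = {"img_3": 2.0, "img_5": 1.0}     # graded relevance for nDCG

print(recall_at_k(ranked, relevant, k=5))     # 1.0
print(ndcg_at_k(ranked, rel_grades, k=5))     # ≈ 0.63
```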

Leaderboard

[Leaderboard released on 2016-01-20: for each subtask (Real Studio, Real StreetLook, AI-Gen Studio, AI-Gen StreetLook), models are ranked by Coarse Recall, Fine Recall, and nDCG at @1, @5, @10, and @20, with separate views for open-source and proprietary models.]

Results of different models on LookBench. The best-performing model for each metric is shown in bold, and the second best is underlined.
GR-Pro and GR-Lite are our proprietary and open-source models, respectively.

BibTeX


@article{gao2026lookbench,
  title={LookBench: A Live and Holistic Open Benchmark for Fashion Image Retrieval},
  author={Chao Gao and Siqiao Xue and Yimin Peng and Jiwen Fu and Tingyi Gu and Shanshan Li and Fan Zhou},
  journal={arXiv preprint arXiv:2601.14706},
  year={2026},
  url={https://arxiv.org/abs/2601.14706},
}
