Bongard-OpenWorld: Few-Shot Reasoning
for Free-form Visual Concepts in the Real World

1School of Computer Science, Peking University
2National Key Laboratory of General Artificial Intelligence, BIGAI
3School of Intelligence Science and Technology, Peking University
4Institute for Artificial Intelligence, Peking University
*Equal contribution †Co-corresponding authors
(Accepted to ICLR 2024)

Abstract

We introduce Bongard-OpenWorld, a new benchmark for evaluating real-world few-shot reasoning for machine vision. It originates from the classical Bongard Problems (BPs): given two sets of images (positive and negative), the model needs to identify the set that query images belong to by inducing the visual concept that is exclusively depicted by images from the positive set. Our benchmark inherits the few-shot concept induction of the original BPs while adding two novel layers of challenge: 1) open-world free-form concepts, as the visual concepts in Bongard-OpenWorld are unique compositions of terms from an open vocabulary, ranging from object categories to abstract visual attributes and commonsense factual knowledge; 2) real-world images, as opposed to the synthetic diagrams used by many counterparts. In our exploration, Bongard-OpenWorld already poses a significant challenge to current few-shot reasoning algorithms. We further investigate to what extent the recently introduced Large Language Models (LLMs) and Vision-Language Models (VLMs) can solve our task, by directly probing VLMs and by combining VLMs and LLMs in an interactive reasoning scheme. We also devise a neuro-symbolic reasoning approach that reconciles LLMs & VLMs with logical reasoning to emulate the human problem-solving process for Bongard Problems. However, none of these approaches manages to close the human-machine gap: the best learner achieves 64% accuracy, while human participants easily reach 91%. We hope Bongard-OpenWorld can help us better understand the limitations of current visual intelligence and facilitate future research on visual agents with stronger few-shot visual reasoning capabilities.

Approaches

We explore four families of approaches: (a) casting Bongard-OpenWorld into a standard "2-way, 6-shot" few-shot learning problem and tackling it with state-of-the-art few-shot learners built on pretrained image representations; (b) combining an LLM (reasoner) and a VLM (image captioner) in a single-round fashion, where the VLM simply captions each Bongard image and sends the captions to the LLM for solving the problem (a minimal sketch follows below); (c) extending the method in (b) to multiple rounds, where the LLM also iteratively probes the VLM for more image details, yielding more condensed information for solving the problem; (d) a neuro-symbolic approach, where a VLM generates the initial captions and an LLM extracts visual concepts from them; these concepts are subsequently updated through logical operations, leveraging the responses provided by the VLM, until the problem is solved (see the second sketch below).
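To make approach (b) concrete, here is a minimal sketch of the single-round caption-then-reason pipeline. The `caption_image` and `query_llm` helpers are hypothetical wrappers around whichever VLM and LLM you choose; they are not part of the Bongard-OpenWorld codebase, and the prompt wording is illustrative rather than the one used in the paper.

```python
from typing import List

def caption_image(image_path: str) -> str:
    """Hypothetical VLM wrapper: return a one-sentence caption for the image."""
    raise NotImplementedError

def query_llm(prompt: str) -> str:
    """Hypothetical LLM wrapper: return the model's text completion."""
    raise NotImplementedError

def solve_single_round(positives: List[str], negatives: List[str], query: str) -> str:
    """Approach (b): caption every image once, then ask the LLM to classify
    the query image as positive or negative in a single round."""
    pos_caps = [caption_image(p) for p in positives]
    neg_caps = [caption_image(n) for n in negatives]
    prompt = (
        "All positive images share one visual concept; no negative image depicts it.\n"
        "Positive captions:\n" + "\n".join(f"- {c}" for c in pos_caps) + "\n"
        "Negative captions:\n" + "\n".join(f"- {c}" for c in neg_caps) + "\n"
        f"Query caption: {caption_image(query)}\n"
        "Does the query belong to the positive or the negative set? "
        "Answer with a single word."
    )
    return query_llm(prompt).strip().lower()
```

Approach (c) wraps this exchange in a loop, letting the LLM ask the VLM follow-up questions about specific images before committing to an answer.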
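Similarly, the core of approach (d) can be summarized as a logical filtering loop over candidate concepts. `vlm_confirms` below is a hypothetical yes/no probe of the VLM (e.g., via visual question answering); this is a sketch of the idea, not the paper's actual implementation.

```python
def vlm_confirms(image_path: str, concept: str) -> bool:
    """Hypothetical VLM probe: does the image depict the concept?"""
    raise NotImplementedError

def filter_concepts(candidates: set, positives: list, negatives: list) -> set:
    """Approach (d), in spirit: keep only candidate concepts that hold for
    every positive image and for no negative image."""
    surviving = set()
    for concept in candidates:
        holds_on_all_pos = all(vlm_confirms(img, concept) for img in positives)
        holds_on_some_neg = any(vlm_confirms(img, concept) for img in negatives)
        if holds_on_all_pos and not holds_on_some_neg:
            surviving.add(concept)
    return surviving
```

In the full pipeline, the surviving concepts would be handed back to the LLM to label the query images, or to propose refined candidates when no concept survives.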

Qualitative Results

Here are some notable qualitative results that demonstrate the reasoning capabilities of the different approaches. For a better view, please refer to our Paper; the corresponding quantitative results can be found in Table 3.

Examples

Here are some examples from Bongard-OpenWorld; if you want to see more, please refer to our Dataset.

BibTeX

@misc{wu2024bongardopenworld,
      title={Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World}, 
      author={Rujie Wu and Xiaojian Ma and Zhenliang Zhang and Wei Wang and Qing Li and Song-Chun Zhu and Yizhou Wang},
      year={2024},
      eprint={2310.10207},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}