Pixel Aligned Language Models

Google Research, UC San Diego
(* Work done during a Google internship)

Each word predicted by PixelLLM is aligned with a pixel location.


Abstract

Large language models have achieved great success in recent years, and so have their variants in vision. Existing vision-language models can describe images in natural language, answer vision-related questions, or perform complex reasoning about an image. However, it is still unclear how localization tasks, such as word grounding or referring localization, can be performed using large language models. In this work, we aim to develop a vision-language model that can take locations, for example a set of points or boxes, as either inputs or outputs. When taking locations as inputs, the model performs location-conditioned captioning, generating captions for the indicated object or region. When generating locations as outputs, our model regresses pixel coordinates for each output word generated by the language model, and thus performs dense word grounding. Our model is pre-trained on the Localized Narrative dataset, which contains pixel-word-aligned captions derived from human attention. We show that our model can be applied to a variety of location-aware vision-language tasks, including referring localization, location-conditioned captioning, and dense object captioning, achieving state-of-the-art performance on RefCOCO and Visual Genome.


Problem Overview

We propose the Pixel-Aligned Language Model (PixelLLM) to equip large language models with localization capability. The model is pre-trained on localized image captioning data, where each word is labeled with a pixel location, to learn the alignment between words and image pixels. PixelLLM can be applied to various localization tasks, for example location-conditioned captioning when taking locations as input, and referring localization when generating locations as output.
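
As a rough interface sketch of these two modes, the same model either consumes a location prompt and emits text, or consumes text and emits one pixel location per word. The class and method names below are illustrative assumptions, not a released API:

# Illustrative sketch of the two PixelLLM usage modes; all names here are hypothetical.
from typing import List, Tuple

Box = Tuple[float, float, float, float]    # (x1, y1, x2, y2) in pixels
Point = Tuple[float, float]                # (x, y) in pixels

class LocalizationInterface:
    def caption_from_location(self, image, region: Box) -> str:
        """Location-conditioned captioning: a box (or point) prompt in, a caption out."""
        raise NotImplementedError

    def locations_from_text(self, image, sentence: str) -> List[Point]:
        """Dense word grounding: a sentence in, one pixel location per word out.
        Referring localization reads a location for the referring phrase from this output."""
        raise NotImplementedError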

PixelLLM architecture for pixel-aligned captioning

We first encode the input location prompt (a global box prompt in this case) and the input image with the prompt encoder \(\mathcal{P}\) and the image encoder \(\mathcal{V}\) respectively. We then feed the prompt feature \(\mathbf{l}\) and the image feature \(\mathbf{f}\) into the prompt feature extractor to extract the location-specific visual feature \(\mathbf{f_l}\). The large language model \(\mathcal{L}\) then auto-regressively predicts the next text token conditioned on the previous text tokens and the visual feature. We apply a simple MLP on the token features, before the vocabulary-mapping layer of the LLM, to predict the pixel coordinates of each text token. The alignment between the caption and the trace is represented by a color gradient.
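
To make the data flow concrete, below is a minimal, runnable sketch of this forward pass in PyTorch. Every module is a tiny stand-in for the pre-trained components named above (the image encoder \(\mathcal{V}\), prompt encoder \(\mathcal{P}\), prompt feature extractor, and LLM \(\mathcal{L}\)), and all dimensions are illustrative assumptions; the point is only how a per-token MLP head regresses a pixel coordinate alongside the usual next-token logits.

import torch
import torch.nn as nn

class PixelAlignedCaptioner(nn.Module):
    """Minimal sketch, not the authors' implementation."""
    def __init__(self, vocab_size=32000, d=512):
        super().__init__()
        self.image_encoder = nn.Linear(3 * 16 * 16, d)        # stand-in for V (e.g. a ViT over patches)
        self.prompt_encoder = nn.Linear(4, d)                  # stand-in for P, encoding a box (x1, y1, x2, y2)
        self.prompt_feature_extractor = nn.MultiheadAttention(d, 8, batch_first=True)
        self.llm = nn.TransformerDecoder(                      # stand-in for the LLM L
            nn.TransformerDecoderLayer(d, 8, batch_first=True), num_layers=2)
        self.token_embed = nn.Embedding(vocab_size, d)
        self.vocab_head = nn.Linear(d, vocab_size)             # vocabulary mapping (next-token logits)
        self.coord_head = nn.Sequential(                       # simple MLP on token features -> (x, y)
            nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 2))

    def forward(self, patches, box, tokens):
        # patches: (B, N, 3*16*16) flattened image patches, box: (B, 4), tokens: (B, T)
        f = self.image_encoder(patches)                        # image features f
        l = self.prompt_encoder(box).unsqueeze(1)              # prompt feature l
        f_l, _ = self.prompt_feature_extractor(l, f, f)        # location-specific visual feature f_l
        x = self.token_embed(tokens)
        T = tokens.shape[1]
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.llm(x, f_l, tgt_mask=causal)                  # token features before the vocabulary mapping
        return self.vocab_head(h), self.coord_head(h)          # next-token logits and per-token (x, y)

model = PixelAlignedCaptioner()
patches = torch.randn(1, 196, 3 * 16 * 16)                     # a 224x224 image cut into 16x16 patches
box = torch.tensor([[0.0, 0.0, 224.0, 224.0]])                 # global box prompt covering the whole image
tokens = torch.randint(0, 32000, (1, 12))
logits, coords = model(patches, box, tokens)                   # coords has shape (1, 12, 2)

In the actual model the coordinate head can be supervised with the pixel-word-aligned mouse traces of Localized Narratives, while the vocabulary head keeps the standard captioning loss.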

Qualitative Results

Pixel-Aligned Image Captioning

Image: in this image i can see a person standing on the ski boards and holding sticks. i can see he is wearing a jacket, cap and a bag. in the background i can see a fence, few buildings, few lights and the sky.
Image: in this image i can see two persons holding surfboards. the person at the back side is wearing black dress and the person at the back is wearing white dress. at the back side i can see trees and rocks. the sky is in blue color.
Image: in this image i can see a person holding a carrot and a animal. in the background i can see a fence, trees, a horse and the sky.
Image: in this image there is a cat sitting in the box. there is a hat on the cat. there are few objects on the table. there are few objects on the table.

More results on Localized Narratives:

Image: in this image we can see three doughnuts on a paper plate. in the background, we can see a machine.
Image: in this image i can see a teddy bear which is brown in color is sitting on the road and i can see a tree trunk and a pole. in the background i can see the road and the sky.
Image: in this image i can see the wash basin, taps, taps, taps, a mirror, a shower, a glass door, a toilet seat, a toilet seat, a tissue roll and the floor.
Image: in this image i can see a cat which is black and brown in color is sitting on the table. i can see a paper, a pen, a pen, a paper and few other objects on the table. in the background i can see a laptop, a monitor, few books, few other objects on the table, few other objects on the table. i can see a wall and few other objects on the wall.
Image: in this image i can see a cat sitting on the chair. i can see few books, a table, few books, a bag and few other objects on the floor. i can see a bed and few clothes on the bed. i can see a bag and few other objects on the floor.
Image: in this image i can see a cat which is in black and white color, at right i can see a table which is in brown color, at left i can see a cable which is in green color, at the background i can see a green color wall.
Image: in this image i can see a train on the railway track. i can see few poles, wires, a building, few trees, few vehicles on the road. i can also see few poles, few wires, few wires and few wires. i can also see few buildings.
Image: in this image i can see a cat which is in black and brown color standing on the car. i can see a black color car and a lamp. in the background i can see few objects on the floor, few objects, few objects on the floor and the wall.

Referring Localization and Segmentation

Qualitative results on RefCOCO.

Dense Object Captioning

Qualitative results on Visual Genome.

BibTeX


@article{xu2023pixel,
  author    = {Xu, Jiarui and Zhou, Xingyi and Yan, Shen and Gu, Xiuye and Arnab, Anurag and Sun, Chen and Wang, Xiaolong and Schmid, Cordelia},
  title     = {{Pixel Aligned Language Models}},
  journal   = {arXiv preprint arXiv:2312.09237},
  year      = {2023},
}