Abstract

Our model uses these object reconstructions as a top-down attentional bias for efficiently routing the object's relevant spatial and feature information. This reconstruction-based attention operates on two levels. First, the model has a long-range projection that inhibits irrelevant spatial regions based on the mask generated from the most likely object reconstruction. Second, the model dynamically changes its feature routing weights through local recurrence, where part-whole connections are modulated based on the reconstruction error for each hypothesized object (represented as a slot). This formulation loosely implements biased-competition theory, where the reconstruction error biases a competition between object slots for the visual parts.
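The two levels described above can be illustrated with a minimal sketch. Assume each slot has already produced a reconstruction of every part; the per-part reconstruction error then biases a softmax competition between slots (local recurrence), and the winning slot's routing weights serve as the spatial mask (long-range projection). All names, shapes, and the temperature parameter here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reconstruction_attention(parts, slot_recons, tau=1.0):
    """Hedged sketch of reconstruction-as-feedback attention.

    parts:       (P, D) array of part features.
    slot_recons: (K, P, D) array, each slot's reconstruction of every part.
    tau:         temperature for the competition (assumed hyperparameter).

    Returns routing weights (K, P) and a spatial mask (P,).
    """
    # Per-slot, per-part squared reconstruction error.
    err = ((slot_recons - parts[None]) ** 2).sum(-1)   # (K, P)
    # Biased competition: lower error -> stronger part-to-slot routing;
    # slots compete for each part via a softmax over the slot axis.
    routing = np.exp(-err / tau)
    routing /= routing.sum(axis=0, keepdims=True)      # (K, P)
    # Most likely object = slot with lowest total error; its routing
    # weights act as the mask that inhibits irrelevant spatial regions.
    best = err.sum(axis=1).argmin()
    mask = routing[best]                               # (P,)
    return routing, mask

# Toy example: two parts, two slots, each slot explaining one part well.
parts = np.array([[1.0, 0.0], [0.0, 1.0]])
slot_recons = np.array([
    [[1.0, 0.0], [0.0, 0.0]],   # slot 0 reconstructs part 0 well
    [[0.0, 0.0], [0.0, 1.0]],   # slot 1 reconstructs part 1 well
])
routing, mask = reconstruction_attention(parts, slot_recons)
```

Each part ends up routed mostly to the slot that reconstructs it best, which is the biased-competition reading of the mechanism: reconstruction error, not a learned attention query, decides the winner.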

Start Date

13-5-2022 9:25 AM

End Date

13-5-2022 9:50 AM


Reconstruction-as-Feedback Serves as an Effective Attention Mechanism for Object Recognition and Grouping
