Impact Statement:
Subpopulation shift issues arise when a network trained on one set of subpopulations of a label (such as breeds of dogs) does not generalize to unseen subpopulations under the same label. Such misclassification can become catastrophic in critical decision-making scenarios such as autonomous driving. We propose to tackle this by learning richer representations built up hierarchically in a tree, from top-level concepts down to refining concepts, such as labeling 'dog' first as a mammal, then a carnivore, and lastly a dog. We introduce a conditional learning framework that enables hierarchical classification along with collaboration at each level. Representing labels as taxonomical hierarchies also allows us to measure the impact of mispredictions as the graphical distance between the true and predicted classes. We show that embedding knowledge of the hierarchical structure into the learned representation results in both better accuracy and lower rates of catastrophic misprediction on many subp...
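The graphical-distance idea above can be illustrated with a minimal sketch: given a label taxonomy stored as parent pointers, the severity of a misprediction is the number of edges between the true and predicted labels through their lowest common ancestor. The toy taxonomy and helper names below are illustrative assumptions, not the paper's actual hierarchy or implementation.

```python
# Hypothetical sketch: misprediction severity as graph distance in a label
# taxonomy. The taxonomy below is a toy example, not the paper's hierarchy.

# child -> parent edges of a small taxonomy rooted at "entity"
PARENT = {
    "mammal": "entity",
    "carnivore": "mammal",
    "dog": "carnivore",
    "cat": "carnivore",
    "ungulate": "mammal",
    "horse": "ungulate",
}

def ancestors(label):
    """Path from a label up to the root, inclusive."""
    path = [label]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def tree_distance(true_label, pred_label):
    """Edges between two labels via their lowest common ancestor."""
    up_true = ancestors(true_label)
    anc_pred = ancestors(pred_label)
    anc_pred_set = set(anc_pred)
    for depth, node in enumerate(up_true):
        if node in anc_pred_set:
            return depth + anc_pred.index(node)
    raise ValueError("labels are not in the same taxonomy")

print(tree_distance("dog", "cat"))    # siblings under carnivore -> 2
print(tree_distance("dog", "horse"))  # paths meet at mammal -> 4
```

Under this metric, confusing a dog with a cat (distance 2) is a mild error, while confusing a dog with a horse (distance 4) is more severe — the kind of catastrophic misprediction the paper aims to reduce.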
Abstract:
Over the past decade, deep neural networks have proven adept at image classification tasks, often surpassing humans in accuracy. However, standard neural networks often fail to capture hierarchical structure and dependencies among classes in vision tasks. Humans, on the other hand, seem to learn categories conceptually, progressively refining their understanding from high-level concepts down to granular categories. One issue arising from the inability of neural networks to encode such dependencies in their learned structure is subpopulation shift, where a model is queried with novel, unseen classes drawn from a shifted population of the training-set categories. Because the network treats each class as independent of all others, it struggles to categorize shifting populations that are related at higher levels of the hierarchy. In this work, we study the aforementioned problems through the lens of...
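The coarse-to-fine dependency the abstract describes can be sketched with the chain rule of probability: the score of a fine-grained label is the product of conditional probabilities along its root-to-leaf taxonomy path, e.g. P(dog) = P(mammal) · P(carnivore | mammal) · P(dog | carnivore). The level names and numbers below are illustrative assumptions, not outputs of the paper's model.

```python
# Hypothetical sketch of hierarchical (coarse-to-fine) classification via the
# chain rule. Each taxonomy level contributes a conditional probability, and
# the fine label's probability is their product along the path.

def path_probability(conditionals):
    """Multiply per-level conditional probabilities along a root-to-leaf path."""
    p = 1.0
    for prob in conditionals:
        p *= prob
    return p

# Illustrative per-level outputs for one image (assumed values):
p_mammal = 0.95
p_carnivore_given_mammal = 0.90
p_dog_given_carnivore = 0.80

p_dog = path_probability([p_mammal, p_carnivore_given_mammal, p_dog_given_carnivore])
print(round(p_dog, 4))  # 0.95 * 0.90 * 0.80 = 0.684
```

Conditioning each level on its parent is what lets a model remain confident at coarse levels (mammal, carnivore) even when a fine-grained subpopulation (a novel dog breed) was never seen during training.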
Published in: IEEE Transactions on Artificial Intelligence (Volume: 5, Issue: 2, February 2024)