Case ID: M23-164P

Published: 2024-02-12 09:53:19

Last Updated: 2024-02-12 09:53:19


Inventor(s)

Tejas Gokhale
Rushil Anirudh
Jayaraman Thiagarajan
Bhavya Kailkhura
Chitta Baral
Yezhou Yang

Technology categories

Computing & Information Technology
Physical Science

Technology keywords

Algorithm Development
Imaging
Machine Learning
Neural Computing
PS-Computing and Information Technology


Licensing Contacts

Physical Sciences Team

Implementing Improved Diversity Using Adversarially Learned Transformations for Domain Generalization

Machine learning models, and neural networks in particular, are widely used in image processing and are trained to improve their output quality. Domain generalization is the problem of making accurate predictions on previously unseen domains, especially when those domains differ substantially from the data distribution on which the model was trained. In single source domain generalization (SSDG), the model has access to only a single training domain yet is expected to generalize to multiple, different testing domains. This is difficult because a single source provides only limited information for training. Current methods expose the model to a diverse set of augmentations during training to improve its robustness under distribution shifts. While diversity is necessary for SSDG, diversity alone is insufficient: blindly exposing a model to a wide range of transformations does not guarantee better generalization. What is needed are carefully designed forms of diversity, specifically ones that expose the model to unique, task-dependent transformations with large semantic changes that are otherwise unrealizable with plug-and-play augmentations.

Researchers at Arizona State University and Lawrence Livermore National Laboratory have developed a method that improves diversity and maximizes the applicability of trained models through greater generalization. The method uses an adversary neural network to model plausible yet hard image transformations that fool a classifier, exposing the classifier to a large space of transformations for superior domain generalization performance. For each batch, the adversary network is randomly initialized and then optimized for a fixed number of steps to maximize the classifier's error. The classifier is trained by enforcing consistency between its predictions on the clean and transformed images.
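For concreteness, the per-batch adversary loop can be sketched in PyTorch as below. This is a minimal illustration only: the names and values here (AdversaryNet, adversarial_transform, adv_steps, adv_lr) and the network architecture are assumptions for exposition, not the researchers' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdversaryNet(nn.Module):
    """Small image-to-image convolutional network that models plausible
    transformations of the input (assumes images normalized to [0, 1])."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))  # keep outputs in a valid image range

def adversarial_transform(classifier, images, labels, adv_steps=10, adv_lr=1e-3):
    """Randomly re-initialize the adversary for this batch, then optimize it
    for a fixed number of steps to maximize the classifier's error."""
    adversary = AdversaryNet(images.shape[1]).to(images.device)  # fresh init per batch
    opt = torch.optim.Adam(adversary.parameters(), lr=adv_lr)
    for _ in range(adv_steps):
        # Minimizing the negated cross-entropy maximizes classification error.
        loss = -F.cross_entropy(classifier(adversary(images)), labels)
        opt.zero_grad()
        loss.backward()  # classifier grads left here are cleared by the outer update
        opt.step()
    with torch.no_grad():
        return adversary(images)  # hard, adversarially transformed batch
```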

The method was demonstrated on multiple benchmarks, e.g., PACS, Office-Home, and Digits. On each benchmark, it outperformed state-of-the-art single source domain generalization methods by a significant margin.

Related Publication: Improving Diversity with Adversarially Learned Transformations for Domain Generalization

Potential Applications:

  • For training machine learning models and neural networks for image processing tasks

Benefits and Advantages:

  • Updates a convolutional network to learn plausible image transformations of the source domain that can fool the classifier during training
  • Enforces a consistency constraint on the predictions on clean images and transformed images (see the sketch after this list)
  • Can be naturally combined with existing diversity modules like RandConv or AugMix to improve their performance
  • Outperforms state-of-the-art single source domain generalization techniques, including standard data augmentation methods, on multiple benchmarks, as it is able to generate a diverse set of large transformations of the source domain
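As referenced in the list above, one plausible way to wire the consistency constraint into the classifier update is sketched below, reusing adversarial_transform from the earlier sketch. The one-way KL divergence between transformed and clean predictions is an assumed choice of consistency loss; the related publication may implement this term differently.

```python
def classifier_step(classifier, optimizer, images, labels):
    """One training step: task loss on clean images plus a consistency term
    between predictions on clean and adversarially transformed images."""
    transformed = adversarial_transform(classifier, images, labels)
    logits_clean = classifier(images)
    logits_adv = classifier(transformed)
    task_loss = F.cross_entropy(logits_clean, labels)
    # One-way KL from transformed predictions to (detached) clean predictions;
    # the exact form of the consistency loss is an assumption here.
    consistency = F.kl_div(
        F.log_softmax(logits_adv, dim=1),
        F.softmax(logits_clean, dim=1).detach(),
        reduction="batchmean",
    )
    loss = task_loss + consistency
    optimizer.zero_grad()  # also clears gradients left over from the adversary loop
    loss.backward()
    optimizer.step()
    return loss.item()
```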