(Figure: a photo by day and by night; a painting in black and white and in color)
That's what we call... domain adaptation.
One task (classification, segmentation, ...), two datasets:
- Source: photos, fully labeled
- Target: paintings, unlabeled or semi-labeled
Typical applications:
- Calibration (physics, biology, ...)
- Simulation vs. reality
- Sentiment analysis across different categories
- Adaptation between cameras
| Setting | Source | Target |
|---|---|---|
| Unsupervised domain adaptation | Fully labeled | Fully unlabeled |
| Semi-supervised domain adaptation | Fully labeled | Partially labeled |
| Few-shot domain adaptation | Fully labeled | Few labeled samples |
\[ d_{\mathcal{H}\Delta\mathcal{H}}(X_s, X_t) = 2 \sup_{h,h' \in \mathcal{H}} \left| \Pr_{x \sim X_s}[h(x) \neq h'(x)] - \Pr_{x \sim X_t}[h(x) \neq h'(x)] \right| \]
Hypothesis 1 vs. Hypothesis 2: to what extent can we find two hypotheses that are very similar in one domain but very different in the other?
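This quantity can be estimated directly from samples. A minimal, stdlib-only sketch (a hypothetical toy setup, not from the slides): a finite family of 1-D threshold classifiers, a source drawn from $\mathcal{N}(0,1)$ and a target drawn from $\mathcal{N}(2,1)$, with the supremum taken over all pairs in the family.

```python
import random

# Hypothetical toy setup: 1-D data, hypothesis class H of threshold
# classifiers h_t(x) = [x > t] for a finite grid of thresholds t.
random.seed(0)
source = [random.gauss(0.0, 1.0) for _ in range(500)]   # samples of X_s
target = [random.gauss(2.0, 1.0) for _ in range(500)]   # samples of X_t

thresholds = [i / 4 for i in range(-12, 20)]            # finite class H

def disagreement(t1, t2, xs):
    """Fraction of samples on which h_t1 and h_t2 disagree."""
    return sum(((x > t1) != (x > t2)) for x in xs) / len(xs)

# d_{H delta H} = 2 * sup over pairs (h, h') of
#   |disagreement on the source - disagreement on the target|
d = 2 * max(abs(disagreement(t1, t2, source) - disagreement(t1, t2, target))
            for t1 in thresholds for t2 in thresholds)
print(f"empirical H-delta-H divergence: {d:.2f}")
```

The two distributions overlap little, so some pair of thresholds brackets a region that is dense in one domain and nearly empty in the other, and the estimate comes out large (close to its maximum of 2).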
Definition. Let $S$ be a space of probability distributions. A divergence $D: S \times S \rightarrow \mathbb{R}$ is a function such that, for all $P, Q \in S$:
- $D(P, Q) \geq 0$
- $D(P, Q) = 0$ if and only if $P = Q$
(Courtesy Vincent Herrmann)
Special case: empirical distribution (uniform distribution on every sample)
(Made with the optimal transport library POT)
$\mathrm{W}(P_r,P_{\theta}) = \inf_{\gamma \in \Pi} \, \sum\limits_{i,j} \Vert x_i - y_j \Vert \gamma (x_i,y_j)$ with $\gamma (x_i,y_j) \in \{0,1\}$
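With $\gamma(x_i, y_j) \in \{0,1\}$ and uniform weights on two equal-size samples, the infimum above is a matching problem: each $x_i$ is paired with exactly one $y_j$. A stdlib-only sketch on hypothetical toy points (brute force over all matchings; POT solves the same problem efficiently at scale):

```python
from itertools import permutations
from math import hypot

# Toy 2-D point clouds (hypothetical data)
xs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # samples of P_r
ys = [(2.0, 0.0), (2.0, 1.0), (3.0, 0.0)]   # samples of P_theta

def wasserstein_bruteforce(xs, ys):
    """min over matchings of sum_i ||x_i - y_sigma(i)||, i.e. the formula
    above with gamma restricted to 0/1 permutation matrices."""
    return min(sum(hypot(x[0] - y[0], x[1] - y[1]) for x, y in zip(xs, perm))
               for perm in permutations(ys))

print(wasserstein_bruteforce(xs, ys))   # → 6.0
```

Brute force is exponential in the number of samples, which is why in practice one uses a dedicated solver such as POT's exact linear-programming routines.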
(Courty et al., 2017)
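The idea in Courty et al. (2017) is to use the transport plan itself for adaptation: once a coupling $\gamma$ between source and target samples is known, each source point is moved into the target domain by the barycentric projection $x_i \mapsto \sum_j \gamma_{ij} y_j / \sum_j \gamma_{ij}$, and a classifier is then trained on the mapped, still-labeled source points. A minimal sketch on hypothetical toy data:

```python
# Hypothetical toy data: two source points, two target points, and a
# transport plan gamma that happens to be a 0/1 matching.
xs = [(0.0, 0.0), (1.0, 0.0)]          # labeled source samples
ys = [(2.0, 1.0), (3.0, 1.0)]          # unlabeled target samples
gamma = [[1.0, 0.0],
         [0.0, 1.0]]                   # transport plan (rows: source points)

def barycentric_map(gamma, ys):
    """Map each source point to the gamma-weighted average of the targets."""
    mapped = []
    for row in gamma:
        mass = sum(row)
        mapped.append(tuple(sum(g * y[k] for g, y in zip(row, ys)) / mass
                            for k in range(2)))
    return mapped

print(barycentric_map(gamma, ys))      # → [(2.0, 1.0), (3.0, 1.0)]
```

With a 0/1 plan each source point lands exactly on its matched target; with a smoother (e.g. entropy-regularized) plan it lands on a weighted average of several target points.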
Example of CycleGAN (Zhu et al., 2017)
Toy example: 2D Gaussians
Results
On the MNIST-SVHN benchmark
SVHN (source) → MNIST (target)
| Year | Algorithm | Accuracy (%) |
|---|---|---|
| 2015 | SA | 59.3 |
| 2015 | DANN | 73.8 |
| 2016 | DRCN | 82.0 |
| 2016 | DSN | 82.7 |
| 2016 | DTN | 90.7 |
| 2017 | UNIT | 90.5 |
| 2017 | GenToAdapt | 92.4 |
| 2017 | DA_assoc | 97.6 |
| 2018 | DIRT-T | 99.4 |