**Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize**
Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Kevin Scaman, Hoi-To Wai (NIPS 2021)

This paper provides a non-asymptotic analysis of linear stochastic approximation (LSA) algorithms with fixed stepsize. This family of methods arises in many machine learning tasks and is used to obtain approximate solutions of a linear system $\bar{A}\theta = \bar{b}$ for which $\bar{A}$ and $\bar{b}$ can only be accessed through random estimates $\{({\bf A}_n, {\bf b}_n): n \in \mathbb{N}^*\}$. Our analysis is based on new results regarding moments and high probability bounds for products of matrices which are shown to be tight. We derive high probability bounds on the performance of LSA under weaker conditions on the sequence $\{({\bf A}_n, {\bf b}_n): n \in \mathbb{N}^*\}$ than previous works. In exchange, we establish polynomial concentration bounds whose order depends on the stepsize. We show that our conclusions cannot be improved without additional assumptions on the sequence of random matrices $\{{\bf A}_n: n \in \mathbb{N}^*\}$, and in particular that no Gaussian or exponential high probability bounds can hold. Finally, we pay particular attention to establishing bounds that are sharp in the number of iterations and the stepsize, and whose leading terms contain the covariance matrices appearing in the central limit theorems.

https://papers.nips.cc/paper_files/paper/2021/hash/fc95fa5740ba01a870cfa52f671fe1e4-Abstract.html

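The fixed-stepsize LSA recursion described above is easy to simulate. Below is a minimal numerical sketch (not the authors' analysis; the matrix, noise level, and stepsize are illustrative) showing that the iterates fluctuate around the solution of $\bar{A}\theta = \bar{b}$ and that tail averaging recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target linear system A_bar @ theta = b_bar (illustrative, d = 2).
A_bar = np.array([[2.0, 0.5], [0.5, 1.0]])
b_bar = np.array([1.0, -1.0])
theta_star = np.linalg.solve(A_bar, b_bar)

# Fixed-stepsize LSA: theta_{n+1} = theta_n - alpha * (A_n @ theta_n - b_n),
# where (A_n, b_n) are i.i.d. noisy estimates of (A_bar, b_bar).
alpha, n_iters = 0.05, 20000
theta = np.zeros(2)
iterates = []
for _ in range(n_iters):
    A_n = A_bar + 0.1 * rng.standard_normal((2, 2))
    b_n = b_bar + 0.1 * rng.standard_normal(2)
    theta = theta - alpha * (A_n @ theta - b_n)
    iterates.append(theta)

# With a fixed stepsize the iterates do not converge to theta_star; they
# fluctuate around it, so we average the tail of the trajectory.
theta_avg = np.mean(iterates[n_iters // 2:], axis=0)
print(np.linalg.norm(theta_avg - theta_star))
```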
**Learning Large Neighborhood Search Policy for Integer Programming**
Yaoxin Wu, Wen Song, Zhiguang Cao, Jie Zhang (NIPS 2021)

We propose a deep reinforcement learning (RL) method to learn large neighborhood search (LNS) policy for integer programming (IP). The RL policy is trained as the destroy operator to select a subset of variables at each step, which is reoptimized by an IP solver as the repair operator. However, the combinatorial number of variable subsets prevents direct application of typical RL algorithms. To tackle this challenge, we represent all subsets by factorizing them into binary decisions on each variable. We then design a neural network to learn policies for each variable in parallel, trained by a customized actor-critic algorithm. We evaluate the proposed method on four representative IP problems. Results show that it can find better solutions than SCIP in much less time, and significantly outperform other LNS baselines with the same runtime. Moreover, these advantages notably persist when the policies generalize to larger problems. Further experiments with Gurobi also reveal that our method can outperform this state-of-the-art commercial solver within the same time limit.

https://papers.nips.cc/paper_files/paper/2021/hash/fc9e62695def29ccdb9eb3fed5b4c8c8-Abstract.html

**Dynamic Trace Estimation**
Prathamesh Dharangutte, Christopher Musco (NIPS 2021)

We study a dynamic version of the implicit trace estimation problem. Given access to an oracle for computing matrix-vector multiplications with a dynamically changing matrix A, our goal is to maintain an accurate approximation to A's trace using as few multiplications as possible. We present a practical algorithm for solving this problem and prove that, in a natural setting, its complexity is quadratically better than the standard solution of repeatedly applying Hutchinson's stochastic trace estimator. We also provide an improved algorithm assuming additional common assumptions on A's dynamic updates. We support our theory with empirical results, showing significant computational improvements on three applications in machine learning and network science: tracking moments of the Hessian spectral density during neural network optimization, counting triangles and estimating natural connectivity in a dynamically changing graph.

https://papers.nips.cc/paper_files/paper/2021/hash/fcdf698a5d673435e0a5a6f9ffea05ca-Abstract.html

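The baseline mentioned in this abstract, Hutchinson's stochastic trace estimator, needs only matrix-vector products with A. A self-contained sketch (the test matrix and query budget are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def hutchinson_trace(matvec, dim, num_queries, rng):
    """Estimate tr(A) using only matrix-vector products with A.

    E[g^T A g] = tr(A) when g has i.i.d. +/-1 (Rademacher) entries.
    """
    total = 0.0
    for _ in range(num_queries):
        g = rng.choice([-1.0, 1.0], size=dim)
        total += g @ matvec(g)
    return total / num_queries

A = rng.standard_normal((50, 50))
A = A + A.T  # symmetric test matrix
est = hutchinson_trace(lambda v: A @ v, 50, num_queries=2000, rng=rng)
print(est, np.trace(A))
```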
**Provable Representation Learning for Imitation with Contrastive Fourier Features**
Ofir Nachum, Mengjiao Yang (NIPS 2021)

In imitation learning, it is common to learn a behavior policy to match an unknown target policy via max-likelihood training on a collected set of target demonstrations. In this work, we consider using offline experience datasets -- potentially far from the target distribution -- to learn low-dimensional state representations that provably accelerate the sample-efficiency of downstream imitation learning. A central challenge in this setting is that the unknown target policy itself may not exhibit low-dimensional behavior, and so there is a potential for the representation learning objective to alias states in which the target policy acts differently. Circumventing this challenge, we derive a representation learning objective that provides an upper bound on the performance difference between the target policy and a low-dimensional policy trained with max-likelihood, and this bound is tight regardless of whether the target policy itself exhibits low-dimensional structure. Moving to the practicality of our method, we show that our objective can be implemented as contrastive learning, in which the transition dynamics are approximated by either an implicit energy-based model or, in some special cases, an implicit linear model with representations given by random Fourier features. Experiments on both tabular environments and high-dimensional Atari games provide quantitative evidence for the practical benefits of our proposed objective.

https://papers.nips.cc/paper_files/paper/2021/hash/fd00d3474e495e7b6d5f9f575b2d7ec4-Abstract.html

**MICo: Improved representations via sampling-based state similarity for Markov decision processes**
Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, Mark Rowland (NIPS 2021)

We present a new behavioural distance over the state space of a Markov decision process, and demonstrate the use of this distance as an effective means of shaping the learnt representations of deep reinforcement learning agents. While existing notions of state similarity are typically difficult to learn at scale due to high computational cost and lack of sample-based algorithms, our newly-proposed distance addresses both of these issues. In addition to providing detailed theoretical analyses, we provide empirical evidence that learning this distance alongside the value function yields structured and informative representations, including strong results on the Arcade Learning Environment benchmark.

https://papers.nips.cc/paper_files/paper/2021/hash/fd06b8ea02fe5b1c2496fe1700e9d16c-Abstract.html

**Counterfactual Explanations in Sequential Decision Making Under Uncertainty**
Stratis Tsirtsis, Abir De, Manuel Rodriguez (NIPS 2021)

Methods to find counterfactual explanations have predominantly focused on one-step decision making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we formally state the problem of finding counterfactual explanations for sequential decision making processes. In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions differing in at most k actions from the observed sequence that could have led the observed process realization to a better outcome. Then, we introduce a polynomial time algorithm based on dynamic programming to build a counterfactual policy that is guaranteed to always provide the optimal counterfactual explanation on every possible realization of the counterfactual environment dynamics. We validate our algorithm using both synthetic and real data from cognitive behavioral therapy and show that the counterfactual explanations our algorithm finds can provide valuable insights to enhance sequential decision making under uncertainty.

https://papers.nips.cc/paper_files/paper/2021/hash/fd0a5a5e367a0955d81278062ef37429-Abstract.html

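The Gumbel-Max structural causal model used above admits a compact illustration: fixing the Gumbel noise that produced an observed categorical transition identifies the counterfactual transition under an alternative action in the same realization. A toy sketch (the transition kernel is hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Gumbel-Max SCM for a categorical transition:
#   next_state = argmax_s [ log P(s | state, action) + G_s ],  G_s ~ Gumbel(0, 1).
# Reusing the same Gumbel noise G under a different action gives the
# counterfactual next state for that particular realization.
P = {  # hypothetical 2-action, 3-state transition kernel
    0: np.array([0.7, 0.2, 0.1]),
    1: np.array([0.1, 0.2, 0.7]),
}

g = rng.gumbel(size=3)                              # shared exogenous noise
factual = int(np.argmax(np.log(P[0]) + g))          # observed: action 0 taken
counterfactual = int(np.argmax(np.log(P[1]) + g))   # same noise, action 1
print(factual, counterfactual)

# Sanity check: the Gumbel-max trick reproduces the categorical marginal.
samples = np.argmax(np.log(P[0]) + rng.gumbel(size=(20000, 3)), axis=1)
freqs = np.bincount(samples, minlength=3) / 20000
```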
**Streaming Linear System Identification with Reverse Experience Replay**
Suhas Kowshik, Dheeraj Nagaraj, Prateek Jain, Praneeth Netrapalli (NIPS 2021)

We consider the problem of estimating a linear time-invariant (LTI) dynamical system from a single trajectory via streaming algorithms, which is encountered in several applications including reinforcement learning (RL) and time-series analysis. While the LTI system estimation problem is well-studied in the {\em offline} setting, the practically important streaming/online setting has received little attention. Standard streaming methods like stochastic gradient descent (SGD) are unlikely to work since streaming points can be highly correlated. In this work, we propose a novel streaming algorithm, SGD with Reverse Experience Replay (SGD-RER), that is inspired by the experience replay (ER) technique popular in the RL literature. SGD-RER divides data into small buffers and runs SGD backwards on the data stored in the individual buffers. We show that this algorithm exactly deconstructs the dependency structure and obtains information theoretically optimal guarantees for both parameter error and prediction error. Thus, we provide the first -- to the best of our knowledge -- optimal SGD-style algorithm for the classical problem of linear system identification with a first order oracle. Furthermore, SGD-RER can be applied to more general settings like sparse LTI identification with known sparsity pattern, and non-linear dynamical systems. Our work demonstrates that the knowledge of data dependency structure can aid us in designing statistically and computationally efficient algorithms which can ``decorrelate'' streaming samples.

https://papers.nips.cc/paper_files/paper/2021/hash/fd2c5e4680d9a01dba3aada5ece22270-Abstract.html

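The buffer-and-reverse idea is simple to sketch. The toy implementation below (dimensions, stepsize, and noise scale are illustrative, and it omits refinements from the paper such as buffer gaps and tail averaging) estimates the transition matrix of a stable LTI system from a single streamed trajectory:

```python
import numpy as np

rng = np.random.default_rng(3)

# Streaming LTI system: x_{t+1} = A_star @ x_t + noise.  We estimate A_star
# with SGD + Reverse Experience Replay: fill a small buffer from the stream,
# then run SGD over the buffered transitions in *reverse* order.
d, T, buffer_size, lr = 3, 30000, 50, 0.05
A_star = 0.6 * np.eye(d)
A_hat = np.zeros((d, d))

x = np.zeros(d)
buffer = []
for _ in range(T):
    x_next = A_star @ x + 0.1 * rng.standard_normal(d)
    buffer.append((x, x_next))
    x = x_next
    if len(buffer) == buffer_size:
        for xt, xt1 in reversed(buffer):  # reverse-order SGD pass
            # Gradient step on the squared prediction error ||A xt - xt1||^2.
            A_hat -= lr * np.outer(A_hat @ xt - xt1, xt)
        buffer.clear()

print(np.linalg.norm(A_hat - A_star))
```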
**SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness**
Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, Jinwoo Shin (NIPS 2021)

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations. Under this paradigm, the robustness of a classifier is aligned with the prediction confidence, i.e., higher confidence from a smoothed classifier implies better robustness. This motivates us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating confidences of a smoothed classifier. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness in the case of smoothed classifiers, and offers an intuitive way to adaptively set a new decision boundary between these samples for better robustness. Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers compared to existing state-of-the-art robust training methods.

https://papers.nips.cc/paper_files/paper/2021/hash/fd45ebc1e1d76bc1fe0ba933e60e9957-Abstract.html

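For context, a randomized-smoothing prediction is a majority vote over Gaussian perturbations of the input, and the certified radius grows with the smoothed classifier's confidence, which is the quantity SmoothMix targets. A toy sketch with a stand-in base classifier (the classifier, sigma, and input are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Smoothed classifier g(x) = argmax_c P(f(x + noise) = c), noise ~ N(0, sigma^2 I).
# The certified l2 radius scales with sigma and with the confidence of the
# majority class, which is why calibrating that confidence matters.
def base_classifier(x):
    return int(x.sum() > 0.0)  # toy stand-in for a neural network

def smoothed_predict(x, sigma=0.5, n_samples=1000):
    votes = np.zeros(2)
    for _ in range(n_samples):
        votes[base_classifier(x + sigma * rng.standard_normal(x.shape))] += 1
    top = int(votes.argmax())
    return top, votes[top] / n_samples  # class and its empirical confidence

x = np.array([1.0, 2.0, -0.5])
cls, conf = smoothed_predict(x)
print(cls, conf)
```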
**Action-guided 3D Human Motion Prediction**
Jiangxin Sun, Zihang Lin, Xintong Han, Jian-Fang Hu, Jia Xu, Wei-Shi Zheng (NIPS 2021)

The ability to forecast future human motion is important for human-machine interaction systems to understand human behaviors and interact accordingly. In this work, we focus on developing models to predict future human motion from past observed video frames. Motivated by the observation that human motion is closely related to the action being performed, we propose to explore action context to guide motion prediction. Specifically, we construct an action-specific memory bank to store representative motion dynamics for each action category, and design a query-read process to retrieve some motion dynamics from the memory bank. The retrieved dynamics are consistent with the action depicted in the observed video frames and serve as strong prior knowledge to guide motion prediction. We further formulate an action constraint loss to ensure the global semantic consistency of the predicted motion. Extensive experiments demonstrate the effectiveness of the proposed approach, and we achieve state-of-the-art performance on 3D human motion prediction.

https://papers.nips.cc/paper_files/paper/2021/hash/fd9dd764a6f1d73f4340d570804eacc4-Abstract.html

**Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks**
Maksym Yatsura, Jan Metzen, Matthias Hein (NIPS 2021)

Adversarial attacks based on randomized search schemes have obtained state-of-the-art results in black-box robustness evaluation recently. However, as we demonstrate in this work, their efficiency in different query budget regimes depends on manual design and heuristic tuning of the underlying proposal distributions. We study how this issue can be addressed by adapting the proposal distribution online based on the information obtained during the attack. We consider Square Attack, which is a state-of-the-art score-based black-box attack, and demonstrate how its performance can be improved by a learned controller that adjusts the parameters of the proposal distribution online during the attack. We train the controller using gradient-based end-to-end training on a CIFAR10 model with white box access. We demonstrate that plugging the learned controller into the attack consistently improves its black-box robustness estimate in different query regimes by up to 20% for a wide range of different models with black-box access. We further show that the learned adaptation principle transfers well to other data distributions such as CIFAR100 or ImageNet and to the targeted attack setting.

https://papers.nips.cc/paper_files/paper/2021/hash/fdb55ce855129e05da8374059cc82728-Abstract.html

**Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory**
Zeru Zhang, Jiayin Jin, Zijie Zhang, Yang Zhou, Xin Zhao, Jiaxiang Ren, Ji Liu, Lingfei Wu, Ruoming Jin, Dejing Dou (NIPS 2021)

Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually-crafted heuristics to generate pruned sparse networks. Such heuristic pruning strategies make it hard to guarantee that the pruned networks achieve test accuracy comparable to the original dense ones. Recent works have empirically identified and verified the Lottery Ticket Hypothesis (LTH): a randomly-initialized dense neural network contains an extremely sparse subnetwork, which can be trained to achieve similar accuracy to the former. Due to the lack of theoretical evidence, they often need to run multiple rounds of expensive training and pruning over the original large networks to discover sparse subnetworks with low accuracy loss. By leveraging dynamical systems theory and inertial manifold theory, this work theoretically verifies the validity of the LTH. We explore the possibility of theoretically lossless pruning as well as one-time pruning, compared with existing neural network pruning and LTH techniques. We reformulate the neural network optimization problem as a gradient dynamical system and reduce this high-dimensional system onto inertial manifolds to obtain a low-dimensional system regarding pruned subnetworks. We demonstrate the precondition for and existence of pruned subnetworks, and prune the original networks according to the gap in their spectrum that makes the subnetworks have the smallest dimensions.

https://papers.nips.cc/paper_files/paper/2021/hash/fdc42b6b0ee16a2f866281508ef56730-Abstract.html

**Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training**
Shangshu Qian, Viet Hung Pham, Thibaud Lutellier, Zeou Hu, Jungwon Kim, Lin Tan, Yaoliang Yu, Jiahao Chen, Sameena Shah (NIPS 2021)

Deep learning (DL) systems have been gaining popularity in critical tasks such as credit evaluation and crime prediction. Such systems demand fairness. Recent work shows that DL software implementations introduce variance: identical DL training runs (i.e., identical network, data, configuration, software, and hardware) with a fixed seed produce different models. Such variance could make DL models and networks violate fairness compliance laws, resulting in negative social impact. In this paper, we conduct the first empirical study to quantify the impact of software implementation on the fairness of DL systems and its variance. Our study of 22 mitigation techniques and five baselines reveals up to 12.6% fairness variance across identical training runs with identical seeds. In addition, most debiasing algorithms have a negative impact on the model, such as reducing model accuracy, increasing fairness variance, or increasing accuracy variance. Our literature survey shows that while fairness is gaining popularity in artificial intelligence (AI) related conferences, only 34.4% of the papers use multiple identical training runs to evaluate their approach, raising concerns about their results' validity. We call for better fairness evaluation and testing protocols to improve fairness and fairness variance of DL systems as well as DL research validity and reproducibility at large.

https://papers.nips.cc/paper_files/paper/2021/hash/fdda6e957f1e5ee2f3b311fe4f145ae1-Abstract.html

**Rectangular Flows for Manifold Learning**
Anthony L. Caterini, Gabriel Loaiza-Ganem, Geoff Pleiss, John P. Cunningham (NIPS 2021)

Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allow optimization of their parameters to be efficiently performed via maximum likelihood. However, data of interest are typically assumed to live in some (often unknown) low-dimensional manifold embedded in a high-dimensional ambient space. The result is a modelling mismatch since -- by construction -- the invertibility requirement implies high-dimensional support of the learned distribution. Injective flows, mappings from low- to high-dimensional spaces, aim to fix this discrepancy by learning distributions on manifolds, but the resulting volume-change term becomes more challenging to evaluate. Current approaches either avoid computing this term entirely using various heuristics, or assume the manifold is known beforehand and therefore are not widely applicable. Instead, we propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model, relying on careful use of automatic differentiation and techniques from numerical linear algebra. Both approaches perform end-to-end nonlinear manifold learning and density estimation for data projected onto this manifold. We study the trade-offs between our proposed methods, empirically verify that we outperform approaches ignoring the volume-change term by more accurately learning manifolds and the corresponding distributions on them, and show promising results on out-of-distribution detection. Our code is available at https://github.com/layer6ai-labs/rectangular-flows.

https://papers.nips.cc/paper_files/paper/2021/hash/fde9264cf376fffe2ee4ddf4a988880d-Abstract.html

**On the Generative Utility of Cyclic Conditionals**
Chang Liu, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu (NIPS 2021)

We study whether and how we can model a joint distribution $p(x,z)$ using two conditional models $p(x|z)$ and $q(z|x)$ that form a cycle. This is motivated by the observation that deep generative models, in addition to a likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ for extracting representation, but they rely on a usually uninformative prior distribution $p(z)$ to define a joint distribution, which may cause problems like posterior collapse and manifold mismatch. To explore the possibility of modeling a joint distribution using only $p(x|z)$ and $q(z|x)$, we study their compatibility and determinacy, corresponding to the existence and uniqueness of a joint distribution whose conditional distributions coincide with them. We develop a general theory for operable equivalence criteria for compatibility, and sufficient conditions for determinacy. Based on the theory, we propose a novel generative modeling framework CyGen that only uses the two cyclic conditional models. We develop methods to achieve compatibility and determinacy, and to use the conditional models to fit and generate data. With the prior constraint removed, CyGen better fits data and captures more representative features, supported by both synthetic and real-world experiments.

https://papers.nips.cc/paper_files/paper/2021/hash/fe04e05fbe48920b8ba90bea2ddfe60b-Abstract.html

**Structural Credit Assignment in Neural Networks using Reinforcement Learning**
Dhawal Gupta, Gabor Mihucz, Matthew Schlegel, James Kostas, Philip S. Thomas, Martha White (NIPS 2021)

Structural credit assignment in neural networks is a long-standing problem, with a variety of alternatives to backpropagation proposed to allow for local training of nodes. One of the early strategies was to treat each node as an agent and use a reinforcement learning method called REINFORCE to update each node locally with only a global reward signal. In this work, we revisit this approach and investigate if we can leverage other reinforcement learning approaches to improve learning. We first formalize training a neural network as a finite-horizon reinforcement learning problem and discuss how this facilitates using ideas from reinforcement learning like off-policy learning. We show that the standard on-policy REINFORCE approach, even with a variety of variance reduction approaches, learns suboptimal solutions. We introduce an off-policy approach, to facilitate reasoning about the greedy action for other agents and help overcome stochasticity in other agents. We conclude by showing that these networks of agents can be more robust to correlated samples when learning online.

https://papers.nips.cc/paper_files/paper/2021/hash/fe1f9c70bdf347497e1a01b6c486bdb9-Abstract.html

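The node-as-agent idea can be illustrated on a single stochastic unit: it samples a binary activation and updates its logit using REINFORCE with only a global scalar reward. A minimal sketch (the reward function and constants are illustrative, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(5)

# Treat one stochastic unit as an RL agent: it samples a Bernoulli
# "activation" a ~ Bernoulli(sigmoid(theta)) and updates theta with
# REINFORCE from a *global* reward, with no backpropagated gradient.
theta = 0.0            # logit of the unit
baseline, lr = 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-theta))
    a = float(rng.random() < p)           # sample activation
    reward = 1.0 if a == 1.0 else 0.0     # global reward prefers a = 1
    baseline = 0.9 * baseline + 0.1 * reward
    # grad log pi(a; theta) = a - p for a Bernoulli policy
    theta += lr * (reward - baseline) * (a - p)

print(1.0 / (1.0 + np.exp(-theta)))  # firing probability after training
```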
**A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum**
Prashant Khanduri, Siliang Zeng, Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang (NIPS 2021)

This paper proposes a new algorithm -- the \underline{S}ingle-timescale Do\underline{u}ble-momentum \underline{St}ochastic \underline{A}pprox\underline{i}matio\underline{n} (SUSTAIN) -- for tackling stochastic unconstrained bilevel optimization problems. We focus on bilevel problems where the lower level subproblem is strongly-convex and the upper level objective function is smooth. Unlike prior works which rely on \emph{two-timescale} or \emph{double loop} techniques, we design a stochastic momentum-assisted gradient estimator for both the upper and lower level updates. The latter allows us to control the error in the stochastic gradient updates due to inaccurate solution to both subproblems. If the upper objective function is smooth but possibly non-convex, we show that SUSTAIN requires $O(\epsilon^{-3/2})$ iterations (each using $O(1)$ samples) to find an $\epsilon$-stationary solution. The $\epsilon$-stationary solution is defined as the point whose squared norm of the gradient of the outer function is less than or equal to $\epsilon$. The total number of stochastic gradient samples required for the upper and lower level objective functions matches the best-known complexity for single-level stochastic gradient algorithms. We also analyze the case when the upper level objective function is strongly-convex.

https://papers.nips.cc/paper_files/paper/2021/hash/fe2b421b8b5f0e7c355ace66a9fe0206-Abstract.html

**Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels**
Erik Englesson, Hossein Azizpour (NIPS 2021)

Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with a standard categorical loss function, e.g., cross entropy (CE), to improve their learnability. Here, we propose to use Jensen-Shannon divergence as a noise-robust loss function and show that it interestingly interpolates between CE and MAE with a controllable mixing parameter. Furthermore, we make a crucial observation that CE exhibits lower consistency around noisy data points. Based on this observation, we adopt a generalized version of the Jensen-Shannon divergence for multiple distributions to encourage consistency around data points. Using this loss function, we show state-of-the-art results on both synthetic (CIFAR) and real-world (e.g., WebVision) noise with varying noise rates.

https://papers.nips.cc/paper_files/paper/2021/hash/fe2d010308a6b3799a3d9c728ee74244-Abstract.html

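A weighted Jensen-Shannon loss of this form is straightforward to write down. The sketch below is illustrative (it uses a two-distribution, unnormalized variant with an arbitrary mixing weight, omitting the paper's normalization and multi-distribution extension) and just shows the basic quantity being computed:

```python
import numpy as np

# Weighted Jensen-Shannon divergence between the one-hot label distribution
# e_y and the model prediction p, with mixing weight pi in (0, 1):
#   JS_pi(e_y, p) = pi * KL(e_y || m) + (1 - pi) * KL(p || m),
#   m = pi * e_y + (1 - pi) * p.
def js_loss(p, y, pi, eps=1e-12):
    e = np.zeros_like(p)
    e[y] = 1.0
    m = pi * e + (1 - pi) * p
    kl = lambda a, b: np.sum(a * (np.log(a + eps) - np.log(b + eps)))
    return pi * kl(e, m) + (1 - pi) * kl(p, m)

p_good = np.array([0.98, 0.01, 0.01])   # confident, correct prediction
p_bad = np.array([1 / 3, 1 / 3, 1 / 3]) # uninformative prediction
print(js_loss(p_good, 0, 0.5), js_loss(p_bad, 0, 0.5))
```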
**Continual Learning via Local Module Composition**
Oleksiy Ostapenko, Pau Rodriguez, Massimo Caccia, Laurent Charlin (NIPS 2021)

Modularity is a compelling solution to continual learning (CL), the problem of modeling sequences of related tasks. Learning and then composing modules to solve different tasks provides an abstraction to address the principal challenges of CL including catastrophic forgetting, backward and forward transfer across tasks, and sub-linear model growth. We introduce local module composition (LMC), an approach to modular CL where each module is provided a local structural component that estimates a module's relevance to the input. Dynamic module composition is performed layer-wise based on local relevance scores. We demonstrate that agnosticity to task identities (IDs) arises from (local) structural learning that is module-specific, as opposed to task- and/or model-specific as in previous works, making LMC applicable to more CL settings. In addition, LMC also tracks statistics about the input distribution and adds new modules when outlier samples are detected. In the first set of experiments, LMC performs favorably compared to existing methods on the recent Continual Transfer-learning Benchmark without requiring task identities. In another study, we show that the locality of structural learning allows LMC to interpolate to related but unseen tasks (OOD), as well as to compose modular networks trained independently on different task sequences into a third modular network without any fine-tuning. Finally, in search for limitations of LMC we study it on more challenging sequences of 30 and 100 tasks, demonstrating that local module selection becomes much more challenging in presence of a large number of candidate modules. In this setting, the best-performing LMC spawns far fewer modules than an oracle-based baseline; however, it reaches a lower overall accuracy. The codebase is available under https://github.com/oleksost/LMC.

https://papers.nips.cc/paper_files/paper/2021/hash/fe5e7cb609bdbe6d62449d61849c38b0-Abstract.html

**Model-Based Episodic Memory Induces Dynamic Hybrid Controls**
Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh (NIPS 2021)

Episodic control enables sample efficiency in reinforcement learning by recalling past experiences from an episodic memory. We propose a new model-based episodic memory of trajectories addressing current limitations of episodic control. Our memory estimates trajectory values, guiding the agent towards good policies. Built upon the memory, we construct a complementary learning model via a dynamic hybrid control unifying model-based, episodic and habitual learning into a single architecture. Experiments demonstrate that our model allows significantly faster and better learning than other strong reinforcement learning agents across a variety of environments including stochastic and non-Markovian settings.

https://papers.nips.cc/paper_files/paper/2021/hash/fe73f687e5bc5280214e0486b273a5f9-Abstract.html

**FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization**
Quoc Tran Dinh, Nhan H Pham, Dzung Phan, Lam Nguyen (NIPS 2021)

We develop two new algorithms, FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination between a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous manner, making them more practical. These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity. In fact, our new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods over existing algorithms on synthetic and real datasets.

https://papers.nips.cc/paper_files/paper/2021/hash/fe7ee8fc1959cc7214fa21c4840dff0a-Abstract.html

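For context, the classical Douglas-Rachford iteration underlying FedDR alternates two proximal steps with a reflection update. A minimal sketch on two quadratics with closed-form prox operators (the objective functions are illustrative, and this is the basic splitting scheme, not the federated variant):

```python
import numpy as np

# Douglas-Rachford splitting for min_x f(x) + g(x), sketched on
# f(x) = 0.5 * ||x - a||^2 and g(x) = 0.5 * ||x - b||^2, whose proximal
# operators are available in closed form.
def prox_quad(c, v, step):
    # prox of step * 0.5 * ||x - c||^2 evaluated at v
    return (v + step * c) / (1 + step)

a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
y = np.zeros(2)
for _ in range(200):
    x = prox_quad(a, y, 1.0)          # x = prox_f(y)
    z = prox_quad(b, 2 * x - y, 1.0)  # z = prox_g(2x - y)  (reflection)
    y = y + z - x                     # averaged update of the driver sequence
print(x)  # approaches the minimizer (a + b) / 2
```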
**Adversarial Examples Make Strong Poisons**
Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojciech Czaja, Tom Goldstein (NIPS 2021)

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data. In this work, we show that adversarial examples, originally intended for attacking pre-trained models, are even more effective for data poisoning than recent methods designed specifically for poisoning. In fact, adversarial examples with labels re-assigned by the crafting network remain effective for training, suggesting that adversarial examples contain useful semantic content, just with the "wrong" labels (according to a network, but not a human). Our method, adversarial poisoning, is substantially more effective than existing poisoning methods for secure dataset release, and we release a poisoned version of ImageNet, ImageNet-P, to encourage research into the strength of this form of data obfuscation.

https://papers.nips.cc/paper_files/paper/2021/hash/fe87435d12ef7642af67d9bc82a8b3cd-Abstract.html

Coresets for Decision Trees of Signals
|
Ibrahim Jubran, Ernesto Evgeniy Sanches Shayda, Ilan I Newman, Dan Feldman
|
A $k$-decision tree $t$ (or $k$-tree) is a recursive partition of a matrix (2D-signal) into $k\geq 1$ block matrices (axis-parallel rectangles, leaves) where each rectangle is assigned a real label. Its regression or classification loss to a given matrix $D$ of $N$ entries (labels) is the sum of squared differences over every label in $D$ and its assigned label by $t$. Given an error parameter $\varepsilon\in(0,1)$, a $(k,\varepsilon)$-coreset $C$ of $D$ is a small summarization that provably approximates this loss to \emph{every} such tree, up to a multiplicative factor of $1\pm\varepsilon$. In particular, the optimal $k$-tree of $C$ is a $(1+\varepsilon)$-approximation to the optimal $k$-tree of $D$. We provide the first algorithm that outputs such a $(k,\varepsilon)$-coreset for \emph{every} such matrix $D$. The size $|C|$ of the coreset is polynomial in $k\log(N)/\varepsilon$, and its construction takes $O(Nk)$ time. This is done by forging a link between decision trees from machine learning and partition trees from computational geometry. Experimental results on \texttt{sklearn} and \texttt{lightGBM} show that applying our coresets on real-world datasets boosts the computation time of random forests and their parameter tuning by up to x$10$, while keeping similar accuracy. Full open source code is provided.
|
https://papers.nips.cc/paper_files/paper/2021/hash/fea9c11c4ad9a395a636ed944a28b51a-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/fea9c11c4ad9a395a636ed944a28b51a-Abstract.html
|
NIPS 2021
|
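The $(k,\varepsilon)$-coreset above is defined with respect to the $k$-tree loss: the sum of squared differences between every entry of the 2D signal and the label of the leaf rectangle covering it. As a minimal illustration (using a hypothetical rectangle-list encoding of the leaves, not the paper's data structure or construction), that loss can be computed as:

```python
import numpy as np

def tree_loss(D, leaves):
    """Regression loss of a k-tree: sum of squared differences between
    every entry of the 2D signal D and the label of the leaf covering it.

    `leaves` is a list of (r0, r1, c0, c1, label) axis-parallel rectangles
    (half-open row/column ranges) assumed to partition D.
    """
    loss = 0.0
    for r0, r1, c0, c1, label in leaves:
        block = D[r0:r1, c0:c1]
        loss += np.sum((block - label) ** 2)
    return loss

# A 2x4 signal split into two leaves (k = 2), each labelled by its mean.
D = np.array([[1.0, 1.0, 5.0, 5.0],
              [1.0, 1.0, 5.0, 5.0]])
leaves = [(0, 2, 0, 2, 1.0), (0, 2, 2, 4, 5.0)]
print(tree_loss(D, leaves))  # perfect fit -> 0.0
```

A $(k,\varepsilon)$-coreset must preserve this quantity, up to $1\pm\varepsilon$, simultaneously for every such partition.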
|||
Local plasticity rules can learn deep representations using self-supervised contrastive predictions
|
Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner
|
Learning in the brain is poorly understood and learning rules that respect biological constraints, yet yield deep hierarchical representations, are still unknown. Here, we propose a learning rule that takes inspiration from neuroscience and recent advances in self-supervised deep learning. Learning minimizes a simple layer-specific loss function and does not need to back-propagate error signals within or between layers. Instead, weight updates follow a local, Hebbian, learning rule that only depends on pre- and post-synaptic neuronal activity, predictive dendritic input and widely broadcasted modulation factors which are identical for large groups of neurons. The learning rule applies contrastive predictive learning to a causal, biological setting using saccades (i.e. rapid shifts in gaze direction). We find that networks trained with this self-supervised and local rule build deep hierarchical representations of images, speech and video.
|
https://papers.nips.cc/paper_files/paper/2021/hash/feade1d2047977cd0cefdafc40175a99-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/feade1d2047977cd0cefdafc40175a99-Abstract.html
|
NIPS 2021
|
|||
MobTCast: Leveraging Auxiliary Trajectory Forecasting for Human Mobility Prediction
|
Hao Xue, Flora Salim, Yongli Ren, Nuria Oliver
|
Human mobility prediction is a core functionality in many location-based services and applications. However, due to the sparsity of mobility data, it is not an easy task to predict future POIs (places-of-interest) that are going to be visited. In this paper, we propose MobTCast, a Transformer-based context-aware network for mobility prediction. Specifically, we explore the influence of four types of context in mobility prediction: temporal, semantic, social, and geographical contexts. We first design a base mobility feature extractor using the Transformer architecture, which takes both the history POI sequence and the semantic information as input. It handles both the temporal and semantic contexts. Based on the base extractor and the social connections of a user, we employ a self-attention module to model the influence of the social context. Furthermore, unlike existing methods, we introduce a location prediction branch in MobTCast as an auxiliary task to model the geographical context and predict the next location. Intuitively, the geographical distance between the location of the predicted POI and the predicted location from the auxiliary branch should be as small as possible. To reflect this relation, we design a consistency loss to further improve the POI prediction performance. In our experimental results, MobTCast outperforms other state-of-the-art next POI prediction methods. Our approach illustrates the value of including different types of context in next POI prediction.
|
https://papers.nips.cc/paper_files/paper/2021/hash/fecf2c550171d3195c879d115440ae45-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/fecf2c550171d3195c879d115440ae45-Abstract.html
|
NIPS 2021
|
|||
Early Convolutions Help Transformers See Better
|
Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollar, Ross Girshick
|
Vision transformer (ViT) models exhibit substandard optimizability. In particular, they are sensitive to the choice of optimizer (AdamW vs. SGD), optimizer hyperparameters, and training schedule length. In comparison, modern convolutional neural networks are easier to optimize. Why is this the case? In this work, we conjecture that the issue lies with the patchify stem of ViT models, which is implemented by a stride-p p×p convolution (p = 16 by default) applied to the input image. This large-kernel plus large-stride convolution runs counter to typical design choices of convolutional layers in neural networks. To test whether this atypical design choice causes an issue, we analyze the optimization behavior of ViT models with their original patchify stem versus a simple counterpart where we replace the ViT stem by a small number of stacked stride-two 3×3 convolutions. While the vast majority of computation in the two ViT designs is identical, we find that this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. Using a convolutional stem in ViT dramatically increases optimization stability and also improves peak performance (by ∼1-2% top-1 accuracy on ImageNet-1k), while maintaining flops and runtime. The improvement can be observed across the wide spectrum of model complexities (from 1G to 36G flops) and dataset scales (from ImageNet-1k to ImageNet-21k). These findings lead us to recommend using a standard, lightweight convolutional stem for ViT models in this regime as a more robust architectural choice compared to the original ViT model design.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff1418e8cc993fe8abcfe3ce2003e5c5-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff1418e8cc993fe8abcfe3ce2003e5c5-Abstract.html
|
NIPS 2021
|
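The stem comparison in the abstract above reduces to simple shape arithmetic: a stack of four stride-two 3×3 convolutions (assuming padding 1, a common setting; the paper's exact stem may differ) downsamples a 224×224 image to the same 14×14 token grid as the original stride-16, 16×16 patchify convolution. A sketch of that arithmetic:

```python
def conv_out(size, kernel, stride, padding):
    # Standard convolution output-size formula.
    return (size + 2 * padding - kernel) // stride + 1

# Original ViT patchify stem: one stride-16, 16x16 convolution.
patchify = conv_out(224, kernel=16, stride=16, padding=0)

# Convolutional stem: four stacked stride-two 3x3 convolutions (padding 1).
size = 224
for _ in range(4):
    size = conv_out(size, kernel=3, stride=2, padding=1)

print(patchify, size)  # both stems produce a 14x14 grid of tokens
```

The two stems therefore feed the transformer body an identically-shaped token grid; the paper's finding is that the gradual, small-kernel downsampling changes the optimization behavior, not the interface.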
|||
Error Compensated Distributed SGD Can Be Accelerated
|
Xun Qian, Peter Richtarik, Tong Zhang
|
Gradient compression is a recent and increasingly popular technique for reducing the communication cost in distributed training of large-scale machine learning models. In this work we focus on developing efficient distributed methods that can work for any compressor satisfying a certain contraction property, which includes both unbiased (after appropriate scaling) and biased compressors such as RandK and TopK. Applied naively, gradient compression introduces errors that either slow down convergence or lead to divergence. A popular technique designed to tackle this issue is error compensation/error feedback. Due to the difficulties associated with analyzing biased compressors, it is not known whether gradient compression with error compensation can be combined with acceleration. In this work, we show for the first time that error compensated gradient compression methods can be accelerated. In particular, we propose and study the error compensated loopless Katyusha method, and establish an accelerated linear convergence rate under standard assumptions. We show through numerical experiments that the proposed method converges with substantially fewer communication rounds than previous error compensated algorithms.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff1ced3097ccf17c1e67506cdad9ac95-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff1ced3097ccf17c1e67506cdad9ac95-Abstract.html
|
NIPS 2021
|
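The TopK compressor and the error-compensation mechanism the abstract builds on can be sketched as follows. This shows plain error feedback only (compress the gradient plus the carried-over residual, transmit the compressed message, keep the rest for the next round), not the accelerated loopless Katyusha method the paper proposes:

```python
import numpy as np

def top_k(x, k):
    """TopK compressor: keep the k largest-magnitude coordinates, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def ef_step(grad, error, k):
    """One error-feedback step: compress gradient plus accumulated error,
    transmit the compressed message, carry the residual forward."""
    corrected = grad + error
    msg = top_k(corrected, k)
    new_error = corrected - msg
    return msg, new_error

g = np.array([0.5, -2.0, 0.1, 1.0])
msg, err = ef_step(g, np.zeros_like(g), k=2)
print(msg)  # only the two largest-magnitude entries are transmitted
print(err)  # the dropped mass is retained and re-injected next round
```

Note that `msg + err` always equals the corrected gradient, so compression delays information rather than discarding it; that invariant is what error-feedback analyses exploit.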
|||
InfoGCL: Information-Aware Graph Contrastive Learning
|
Dongkuan Xu, Wei Cheng, Dongsheng Luo, Haifeng Chen, Xiang Zhang
|
Various graph contrastive learning models have been proposed to improve the performance of tasks on graph datasets in recent years. While effective and prevalent, these models are usually carefully customized. In particular, although all recent works create two contrastive views, they differ widely in view augmentations, architectures, and objectives. It remains an open question how to build a graph contrastive learning model from scratch for particular graph tasks and datasets. In this work, we aim to fill this gap by studying how graph information is transformed and transferred during the contrastive learning process, and proposing an information-aware graph contrastive learning framework called InfoGCL. The key to the success of the proposed framework is to follow the Information Bottleneck principle to reduce the mutual information between contrastive parts while keeping task-relevant information intact at both the level of the individual module and that of the entire framework, so that the information loss during graph representation learning can be minimized. We show for the first time that all recent graph contrastive learning methods can be unified by our framework. Based on theoretical and empirical analysis on benchmark graph datasets, we show that InfoGCL achieves state-of-the-art performance in the settings of both graph classification and node classification tasks.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff1e68e74c6b16a1a7b5d958b95e120c-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff1e68e74c6b16a1a7b5d958b95e120c-Abstract.html
|
NIPS 2021
|
|||
Meta-Learning for Relative Density-Ratio Estimation
|
Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
|
The ratio of two probability densities, called a density-ratio, is a vital quantity in machine learning. In particular, a relative density-ratio, which is a bounded extension of the density-ratio, has received much attention due to its stability and has been used in various applications such as outlier detection and dataset comparison. Existing methods for (relative) density-ratio estimation (DRE) require many instances from both densities. However, sufficient instances are often unavailable in practice. In this paper, we propose a meta-learning method for relative DRE, which estimates the relative density-ratio from a few instances by using knowledge in related datasets. Specifically, given two datasets that consist of a few instances, our model extracts the datasets' information by using neural networks and uses it to obtain instance embeddings appropriate for the relative DRE. We model the relative density-ratio by a linear model on the embedded space, whose global optimum solution can be obtained as a closed-form solution. The closed-form solution enables fast and effective adaptation to a few instances, and its differentiability enables us to train our model such that the expected test error for relative DRE can be explicitly minimized after adapting to a few instances. We empirically demonstrate the effectiveness of the proposed method by using three problems: relative DRE, dataset comparison, and outlier detection.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff49cc40a8890e6a60f40ff3026d2730-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff49cc40a8890e6a60f40ff3026d2730-Abstract.html
|
NIPS 2021
|
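The relative density-ratio the abstract refers to is commonly defined as $r_\alpha(x) = p(x) / (\alpha p(x) + (1-\alpha) q(x))$, which is bounded above by $1/\alpha$; that bound is the source of the stability the abstract mentions. A minimal sketch of the quantity itself (the boundedness, not the paper's meta-learning estimator):

```python
import math

def relative_density_ratio(p, q, alpha):
    """Relative density-ratio r_alpha(x) = p(x) / (alpha*p(x) + (1-alpha)*q(x)).
    Bounded above by 1/alpha, unlike the plain ratio p(x)/q(x)."""
    return p / (alpha * p + (1 - alpha) * q)

def gauss(x, mu, sigma):
    # Univariate Gaussian density.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

alpha = 0.5
for x in [-3.0, 0.0, 3.0]:
    r = relative_density_ratio(gauss(x, 0, 1), gauss(x, 1, 1), alpha)
    assert r <= 1 / alpha  # the bound holds everywhere
    print(round(r, 3))
```

At $\alpha = 0$ this recovers the ordinary density-ratio $p/q$, which can blow up wherever $q$ is small; any $\alpha > 0$ caps it at $1/\alpha$.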
|||
Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning
|
Vivien Cabannes, Loucas Pillaud-Vivien, Francis Bach, Alessandro Rudi
|
As annotations of data can be scarce in large-scale practical problems, leveraging unlabelled examples is one of the most important aspects of machine learning. This is the aim of semi-supervised learning. To benefit from the access to unlabelled data, it is natural to smoothly diffuse knowledge from labelled examples to unlabelled ones. This induces the use of Laplacian regularization. Yet, current implementations of Laplacian regularization suffer from several drawbacks, notably the well-known curse of dimensionality. In this paper, we design a new class of algorithms overcoming this issue, unveiling a large body of spectral filtering methods. Additionally, we provide a statistical analysis showing that our estimators exhibit desirable behaviors. They are implemented through (reproducing) kernel methods, for which we provide realistic computational guidelines in order to make our method usable with large amounts of data.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff4d5fbbafdf976cfdc032e3bde78de5-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff4d5fbbafdf976cfdc032e3bde78de5-Abstract.html
|
NIPS 2021
|
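Classical Laplacian regularization penalizes label functions that vary across strongly-connected points via the quadratic form $f^\top L f = \tfrac{1}{2}\sum_{ij} W_{ij}(f_i - f_j)^2$, where $L$ is the graph Laplacian of a similarity matrix $W$. A minimal sketch of that regularizer (the classical object the abstract starts from, not the paper's spectral-filtering estimators):

```python
import numpy as np

def laplacian_quadratic(W, f):
    """Laplacian regularizer f^T L f with L = diag(W 1) - W, equal to
    0.5 * sum_ij W_ij (f_i - f_j)^2: small for labels that agree on
    strongly-connected points, large for labels that split them."""
    L = np.diag(W.sum(axis=1)) - W
    return float(f @ L @ f)

# Three points; the first two are close (large weight), the third is far.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
smooth = np.array([1.0, 1.0, 0.0])   # agrees on the tight pair
rough  = np.array([1.0, 0.0, 0.0])   # splits the tight pair
print(laplacian_quadratic(W, smooth) < laplacian_quadratic(W, rough))  # True
```

Semi-supervised methods add this penalty to a supervised loss so that labels diffuse along the graph; the curse of dimensionality the abstract targets appears in how such penalties behave as the ambient dimension grows.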
|||
Unlabeled Principal Component Analysis
|
Yunzhen Yao, Liangzu Peng, Manolis Tsakiris
|
We introduce robust principal component analysis from a data matrix in which the entries of its columns have been corrupted by permutations, termed Unlabeled Principal Component Analysis (UPCA). Using algebraic geometry, we establish that UPCA is a well-defined algebraic problem in the sense that the only matrices of minimal rank that agree with the given data are row-permutations of the ground-truth matrix, arising as the unique solutions of a polynomial system of equations. Further, we propose an efficient two-stage algorithmic pipeline for UPCA suitable for the practically relevant case where only a fraction of the data have been permuted. Stage-I employs outlier-robust PCA methods to estimate the ground-truth column-space. Equipped with the column-space, Stage-II applies recent methods for unlabeled sensing to restore the permuted data. Experiments on synthetic data, face images, educational and medical records reveal the potential of UPCA for applications such as data privatization and record linkage.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff8c1a3bd0c441439a0a081e560c85fc-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ff8c1a3bd0c441439a0a081e560c85fc-Abstract.html
|
NIPS 2021
|
|||
Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data
|
Andrew Jesson, Panagiotis Tigas, Joost van Amersfoort, Andreas Kirsch, Uri Shalit, Yarin Gal
|
Estimating personalized treatment effects from high-dimensional observational data is essential in situations where experimental designs are infeasible, unethical, or expensive. Existing approaches rely on fitting deep models on outcomes observed for treated and control populations. However, when measuring individual outcomes is costly, as is the case of a tumor biopsy, a sample-efficient strategy for acquiring each result is required. Deep Bayesian active learning provides a framework for efficient data acquisition by selecting points with high uncertainty. However, existing methods bias training data acquisition towards regions of non-overlapping support between the treated and control populations. These are not sample-efficient because the treatment effect is not identifiable in such regions. We introduce causal, Bayesian acquisition functions grounded in information theory that bias data acquisition towards regions with overlapping support to maximize sample efficiency for learning personalized treatment effects. We demonstrate the performance of the proposed acquisition strategies on synthetic and semi-synthetic datasets IHDP and CMNIST and their extensions, which aim to simulate common dataset biases and pathologies.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffa4eb0e32349ae57f7a0ee8c7cd7c11-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffa4eb0e32349ae57f7a0ee8c7cd7c11-Abstract.html
|
NIPS 2021
|
|||
Scalable Rule-Based Representation Learning for Interpretable Classification
|
Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
|
Rule-based models, e.g., decision trees, are widely used in scenarios demanding high model interpretability for their transparent inner structures and good model expressivity. However, rule-based models are hard to optimize, especially on large data sets, due to their discrete parameters and structures. Ensemble methods and fuzzy/soft rules are commonly used to improve performance, but they sacrifice the model interpretability. To obtain both good scalability and interpretability, we propose a new classifier, named Rule-based Representation Learner (RRL), that automatically learns interpretable non-fuzzy rules for data representation and classification. To train the non-differentiable RRL effectively, we project it to a continuous space and propose a novel training method, called Gradient Grafting, that can directly optimize the discrete model using gradient descent. An improved design of logical activation functions is also devised to increase the scalability of RRL and enable it to discretize the continuous features end-to-end. Exhaustive experiments on nine small and four large data sets show that RRL outperforms the competitive interpretable approaches and can be easily adjusted to obtain a trade-off between classification accuracy and model complexity for different scenarios. Our code is available at: https://github.com/12wang3/rrl.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffbd6cbb019a1413183c8d08f2929307-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffbd6cbb019a1413183c8d08f2929307-Abstract.html
|
NIPS 2021
|
|||
Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection
|
NA DONG, Yongqiang Zhang, Mingli Ding, Gim Hee Lee
|
Deep networks have shown remarkable results in the task of object detection. However, their performance suffers critical drops when they are subsequently trained on novel classes without any sample from the base classes originally used to train the model. This phenomenon is known as catastrophic forgetting. Recently, several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection. Despite their effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes. This requirement is impractical in many real-world settings since the base classes do not necessarily co-occur with the novel classes. In view of this limitation, we consider a more practical setting of complete absence of co-occurrence of the base and novel classes for the object detection task. We propose the use of unlabeled in-the-wild data to bridge the non co-occurrence caused by the missing base classes during the training of additional novel classes. To this end, we introduce a blind sampling strategy based on the responses of the base-class model and pre-trained novel-class model to select a smaller relevant dataset from the large in-the-wild dataset for incremental learning. We then design a dual-teacher distillation framework to transfer the knowledge distilled from the base- and novel-class teacher models to the student model using the sampled in-the-wild data. Experimental results on the PASCAL VOC and MS COCO datasets show that our proposed method significantly outperforms other state-of-the-art class-incremental object detection methods when there is no co-occurrence between the base and novel classes during training.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffc58105bf6f8a91aba0fa2d99e6f106-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffc58105bf6f8a91aba0fa2d99e6f106-Abstract.html
|
NIPS 2021
|
|||
A Regression Approach to Learning-Augmented Online Algorithms
|
Keerti Anand, Rong Ge, Amit Kumar, Debmalya Panigrahi
|
The emerging field of learning-augmented online algorithms uses ML techniques to predict future input parameters and thereby improve the performance of online algorithms. Since these parameters are, in general, real-valued functions, a natural approach is to use regression techniques to make these predictions. We introduce this approach in this paper, and explore it in the context of a general online search framework that captures classic problems like (generalized) ski rental, bin packing, minimum makespan scheduling, etc. We show nearly tight bounds on the sample complexity of this regression problem, and extend our results to the agnostic setting. From a technical standpoint, we show that the key is to incorporate online optimization benchmarks in the design of the loss function for the regression problem, thereby diverging from the use of off-the-shelf regression tools with standard bounds on statistical error.
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffeed84c7cb1ae7bf4ec4bd78275bb98-Abstract.html
|
https://papers.nips.cc/paper_files/paper/2021/hash/ffeed84c7cb1ae7bf4ec4bd78275bb98-Abstract.html
|
NIPS 2021
|
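As a point of reference for the online search framework mentioned in the last abstract, the classic ski-rental problem it cites admits a deterministic break-even strategy that is 2-competitive: rent (at unit cost per day) until the cumulative rent would reach the purchase price, then buy. A sketch of that textbook baseline (not the paper's learning-augmented algorithm):

```python
def break_even_cost(num_days, buy_price):
    """Break-even ski-rental strategy: rent for buy_price - 1 days,
    then buy on day buy_price if still skiing."""
    if num_days < buy_price:
        return num_days                      # rented every day
    return (buy_price - 1) + buy_price       # rented, then bought

def optimal_cost(num_days, buy_price):
    # The offline optimum knows num_days in advance.
    return min(num_days, buy_price)

# The strategy is 2-competitive: it never pays more than twice the optimum.
for days in range(1, 50):
    for price in range(1, 20):
        ratio = break_even_cost(days, price) / optimal_cost(days, price)
        assert ratio <= 2
print("break-even strategy is 2-competitive on all tested instances")
```

A learning-augmented algorithm in the paper's sense replaces the fixed break-even rule with one informed by a regression-based prediction of the unknown input parameter (here, the number of ski days), trading worst-case guarantees against performance when predictions are accurate.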