Publications

Preprints


Technical Report / Year
Constraint Learning for Control Tasks with Limited Duration Barrier Functions

[arXiv] [Video]

Abstract: When deploying autonomous agents in unstructured environments over sustained periods of time, adaptability and robustness oftentimes outweigh optimality as a primary consideration. In other words, safety and survivability constraints play a key role. In this paper, we present a novel constraint-learning framework for control tasks built on the idea of constraints-driven control. However, since control policies that keep a dynamical agent within state constraints over infinite horizons are not always available, this work instead considers constraints that can be satisfied over a sufficiently long time horizon T > 0, which we refer to as limited-duration safety. Consequently, value function learning can be used as a tool to help us find limited-duration safe policies. We show that, in some applications, the existence of limited-duration safe policies is actually sufficient for long-duration autonomy. This idea is illustrated on a swarm of simulated robots that are tasked with covering a given area, but that sporadically need to abandon this task to charge batteries. We show how the battery-charging behavior naturally emerges as a result of the constraints. Additionally, using a cart-pole simulation environment, we show how a control policy can be efficiently transferred from the source task, balancing the pole, to the target task, moving the cart in one direction without letting the pole fall.

BibTex:
@article{ohnishi2019constraint,
title={Constraint Learning for Control Tasks with Limited Duration Barrier Functions},
author={Ohnishi, M. and Notomista, G. and Sugiyama, M. and Egerstedt, M.},
journal={arXiv preprint arXiv:1908.09506},
year={2019}
}
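The limited-duration safety notion above can be illustrated with a toy sketch. The battery model, numbers, and function names below are my own, not from the paper: a policy is T-step safe from a given state if it keeps the battery positive for T steps, and a charging behavior emerges from enforcing that constraint.

```python
# Toy sketch of limited-duration safety (model, numbers, and names are
# illustrative, not from the paper). A battery level b drains by `drain`
# per step under the coverage task and gains `charge` per step when charging.

def limited_duration_safe(b0, policy, T, drain=1.0, charge=2.0):
    """Roll the policy forward T steps; safe iff the battery stays positive."""
    b = b0
    for _ in range(T):
        b += charge if policy(b) == 'charge' else -drain
        if b <= 0:
            return False
    return True

# The battery-charging behavior emerges from the constraint: a threshold
# policy that recharges when the battery runs low is T-step safe, while
# the pure coverage policy is not.
threshold_policy = lambda b: 'charge' if b < 3.0 else 'work'
always_work = lambda b: 'work'
```

Under this toy model, `limited_duration_safe(5.0, threshold_policy, T=50)` holds while `limited_duration_safe(5.0, always_work, T=50)` fails, mirroring how the charging behavior in the paper arises from the safety constraint rather than from the task objective.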

2019

Peer-reviewed journal articles / conference proceedings


Journal or Proceeding / Year
Barrier-certified Adaptive Reinforcement Learning with Applications to Brushbot Navigation

[arXiv] [Xplore] [Video]

Abstract: This paper presents a safe learning framework that employs an adaptive model learning algorithm together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique. We use the learned model in combination with control barrier certificates which constrain policies (feedback controllers) in order to maintain safety, which refers to avoiding particular undesirable regions of the state space. Under certain conditions, recovery of safety in the sense of Lyapunov stability after violations of safety due to the nonstationarity is guaranteed. In addition, we reformulate an action-value function approximation to make any kernel-based nonlinear function estimation method applicable to our adaptive learning framework. Lastly, solutions to the barrier-certified policy optimization are guaranteed to be globally optimal, ensuring the greedy policy improvement under mild conditions. The resulting framework is validated via simulations of a quadrotor, which has previously been used under stationarity assumptions in the safe learning literature, and is then tested on a real robot, the brushbot, whose dynamics are unknown, highly complex, and nonstationary.

BibTex:
@article{ohnishi2019barrier,
title={Barrier-certified adaptive reinforcement learning with applications to brushbot navigation},
author={Ohnishi, M. and Wang, L. and Notomista, G. and Egerstedt, M.},
journal={IEEE Trans. Robotics},
year={2019}
}
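The way barrier certificates constrain policies can be sketched for the simplest case: with a single affine constraint, filtering a nominal input reduces to a closed-form projection onto a half-space. This is an illustrative simplification of barrier-certified control, not the paper's algorithm; the toy system and names are assumptions.

```python
import numpy as np

# Minimal sketch (my own illustration, not the paper's method):
# a barrier certificate constrains the input u through
#   grad_h(x) @ (f(x) + g(x) u) >= -alpha * h(x),
# which for one affine constraint is the QP
#   minimize ||u - u_nom||^2  s.t.  a @ u >= b,
# solvable in closed form as a half-space projection.

def barrier_filter(u_nom, a, b):
    """Return the input closest to u_nom that satisfies a @ u >= b."""
    slack = a @ u_nom - b
    if slack >= 0:                      # nominal input is already safe
        return u_nom
    return u_nom - slack * a / (a @ a)  # minimal correction onto a @ u = b

# Toy example: keep h(x) = 1 - x^2 nonnegative for xdot = u, with alpha = 1.
x, alpha = 0.9, 1.0
h, grad_h = 1 - x ** 2, -2 * x
a, b = np.array([grad_h]), -alpha * h
u = barrier_filter(np.array([1.0]), a, b)  # nominal input pushes at the boundary
```

When the nominal input violates the certificate, the filtered input is scaled back so the constraint holds with equality, which is the "minimally invasive" behavior barrier-certified controllers are designed for.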

2019
Online Nonlinear Estimation via Iterative L2-Space Projections: Reproducing Kernel of Subspace

[arXiv] [Xplore]

Abstract: We propose a novel online learning paradigm for nonlinear-function estimation tasks based on the iterative projections in the L2 space with probability measure reflecting the stochastic property of input signals. The proposed learning algorithm exploits the reproducing kernel of the so-called dictionary subspace, based on the fact that any finite-dimensional space of functions has a reproducing kernel characterized by the Gram matrix. The L2-space geometry provides the best decorrelation property in principle. The proposed learning paradigm is significantly different from the conventional kernel-based learning paradigm in two senses: first, the whole space is not a reproducing kernel Hilbert space; and second, the minimum mean squared error estimator gives the best approximation of the desired nonlinear function in the dictionary subspace. It preserves efficiency in computing the inner product as well as in updating the Gram matrix when the dictionary grows. Monotone approximation, asymptotic optimality, and convergence of the proposed algorithm are analyzed based on the variable-metric version of adaptive projected subgradient method. Numerical examples show the efficacy of the proposed algorithm for real data over a variety of methods including the extended Kalman filter and many batch machine-learning methods such as the multilayer perceptron.

BibTex:
@article{ohnishi2018online,
title={Online Nonlinear Estimation via Iterative $L^2$-Space Projections: Reproducing Kernel of Subspace},
author={Ohnishi, M. and Yukawa, M.},
journal={IEEE Trans. Signal Processing},
volume={66},
number={15},
pages={4050--4064},
year={2018}
}
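The iterative-projection update at the heart of this line of work can be sketched as a relaxed projection of the current coefficient vector onto the set of dictionary-subspace functions consistent with the newest sample (an NLMS-type update). The Gaussian dictionary, step size, and function names below are my own illustrative choices, not the paper's construction.

```python
import numpy as np

# Illustrative sketch (assumptions mine): the estimate lives in a fixed
# dictionary subspace, f(x) = w @ k(x), with k(x) = [kappa(x, c_j)] over
# Gaussian centers c_j. Each new sample (x, d) triggers a relaxed
# projection of w onto the hyperplane {w : w @ k(x) = d}.

def gauss_features(x, centers, sigma=0.1):
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def project_step(w, x, d, centers, mu=0.5):
    """Relaxed projection (normalized-LMS form) toward w @ k(x) = d."""
    k = gauss_features(x, centers)
    err = d - w @ k
    return w + mu * err * k / (k @ k)

# Learn f(x) = sin(2*pi*x) online from noisy samples.
rng = np.random.default_rng(0)
centers = np.linspace(0, 1, 20)
w = np.zeros_like(centers)
for _ in range(2000):
    x = rng.uniform(0, 1)
    w = project_step(w, x, np.sin(2 * np.pi * x) + 0.05 * rng.normal(), centers)
```

After the sweep, `gauss_features(x, centers) @ w` tracks the target function closely; the normalization by `k @ k` is what makes each step a (relaxed) metric projection rather than a raw gradient step.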

2018
Continuous-time Value Function Approximation in Reproducing Kernel Hilbert Spaces

[arXiv] [NeurIPS] [Poster] [Video] [RIKEN]

Abstract: Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.

BibTex:
@inproceedings{ohnishi2018continuous,
title={Continuous-time value function approximation in reproducing kernel {H}ilbert spaces},
author={Ohnishi, M. and Yukawa, M. and Johansson, M. and Sugiyama, M.},
booktitle={Advances in Neural Information Processing Systems},
pages={2813--2824},
year={2018}
}
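For a known policy and dynamics, fitting a continuous-time value function with kernel features can be sketched as minimizing the Hamilton-Jacobi-Bellman residual over samples, since both the value and its derivative are linear in the coefficients. This is a simplified one-dimensional illustration, not the paper's framework; the toy system and names are mine.

```python
import numpy as np

# Sketch under my own simplifications: fit V for a known system by
# minimizing the HJB policy-evaluation residual
#   beta * V(x) - V'(x) f(x) - r(x) = 0
# over samples, with V(x) = sum_j a_j kappa(x, c_j) in the span of
# Gaussian kernel sections, so the residual is linear in a.

beta = 1.0
f = lambda x: -x        # dynamics xdot = f(x)
r = lambda x: x ** 2    # running reward

centers, sigma = np.linspace(-2, 2, 15), 0.4
kap  = lambda x, c: np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
dkap = lambda x, c: kap(x, c) * (c - x) / sigma ** 2  # d/dx kappa(x, c)

xs = np.linspace(-2, 2, 200)
Phi  = kap(xs[:, None], centers[None, :])
dPhi = dkap(xs[:, None], centers[None, :])
A = beta * Phi - f(xs)[:, None] * dPhi   # HJB operator applied to the features
a, *_ = np.linalg.lstsq(A, r(xs), rcond=None)

V = lambda x: kap(np.atleast_1d(x)[:, None], centers[None, :]) @ a
# Analytic check for this linear case: beta*V + x*V' = x^2 gives V(x) = x^2 / (beta + 2).
```

Here the fitted `V` should match the closed-form solution x^2/3 for beta = 1; in the kernel setting this least-squares step would be replaced by whichever kernel-based estimator (e.g. a Gaussian process) the framework plugs in.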

2018
Online Learning in L2 Space with Multiple Gaussian Kernels

[Xplore]


BibTex:
@inproceedings{ohnishi2017online,
title={Online learning in $L^2$ space with multiple {G}aussian kernels},
author={Ohnishi, M. and Yukawa, M.},
booktitle={Proc. European Signal Processing Conference (EUSIPCO)},
pages={1594--1598},
year={2017}
}

2017

Workshops & Invited Talks


Title / Organizer Presentation Place Year
Adaptive Safe Learning and Continuous-time Reinforcement Learning
(Organizer: Prof. Stefanos Nikolaidis, University of Southern California)
Invited Talk, University of Southern California, USA, 2019
Continuous-time Value Function Approximation in Reproducing Kernel Hilbert Spaces
(Organizer: RIKEN AIP / Israeli universities/institutes)
Poster, Bar-Ilan University, Tel Aviv, 2018
Online Nonlinear Estimation via Iterative L2-Space Projections
(Organizer: RIKEN AIP / National University of Singapore)
Poster, National University of Singapore, Singapore, 2018

Theses


Title / Degree University Year
Safety-aware Adaptive Reinforcement Learning with Applications to Brushbot Navigation [KTH DiVA]
Degree: M.S. in Electrical Engineering
Royal Institute of Technology, Sweden
Department: Automatic Control
2019
A Study on Hilbert Space Design: Online Learning and Reinforcement Learning
Degree: M.S. in Integrated Design Engineering
Keio University, Japan
Department: Electronics and Electrical Engineering
2019

Awards

Fellowships or Grants


Name Organization Month/Year

Based on academic merit, a repayment exemption was granted by the Japan Student Services Organization for the M.S. study (2016-2019).

Japan Student Services Organization (JASSO), 06/2019

Fellowship for the first academic year of my Ph.D. study (2019-2020).

University of Washington, 05/2019

Fellowship for Ph.D. study abroad, expected to cover two years of my Ph.D. study (2020-2022).

Funai Foundation, 11/2018

Travel grant awarded for our paper "Continuous-time Value Function Approximation in Reproducing Kernel Hilbert Spaces" presented at NeurIPS 2018, Montreal, Canada.

NeurIPS Foundation, 10/2018

Scholarship awarded to selected Keio University students studying abroad.

Keio University, 12/2017

Research grant for master's students presenting their research at international conferences.

Keio University, 07/2017

Travel grant awarded by the School of Electrical Engineering, Royal Institute of Technology, to selected students conducting their master's thesis projects outside of Sweden.

School of Electrical Engineering, KTH, 07/2017

Research grant awarded for selected research projects related to Scandinavian countries.

Scandinavia-Japan Sasakawa Foundation, 03/2017

Scholarship awarded to selected Keio University students studying abroad.

Keio University, 12/2016

Academic awards


Name Organization Month/Year

Best undergraduate research award in the field of information technology, Dept. of Electronics and Electrical Engineering, Keio University.

Dept. of Electronics and Electrical Engineering, Keio University, 02/2016