Kiana Ehsani
PhD Student in Computer Science
University of Washington
Seattle

About Me

I'm a PhD student at the University of Washington in the Paul G. Allen School of Computer Science & Engineering. Currently, I am working as a research assistant under the supervision of Professor Ali Farhadi.

My research interests are in computer vision and machine learning. My current projects focus mainly on representation learning.


Resume

Projects

Here are some of the research projects I have been working on.


Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning

M Wortsman, K Ehsani, M Rastegari, A Farhadi and R Mottaghi (CVPR19, Oral Presentation)

There is a lot to learn about a task by actually attempting it! Learning is continuous, i.e., we learn as we perform. Traditional navigation approaches freeze the model during inference (top row in the intuition figure above). In this paper, we propose a self-adaptive agent for visual navigation that learns via self-supervised interaction with the environment (bottom row in the intuition figure above).

SAVN is a network that
  1. Learns to adapt to new environments without any explicit supervision,
  2. Uses a meta-reinforcement learning approach in which the agent learns a self-supervised interaction loss that encourages effective navigation (sketched after this list),
  3. And shows major improvements in both success rate and SPL for visual navigation in novel scenes.
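
To make the adaptation idea concrete, here is a minimal, hypothetical PyTorch sketch. Nothing here is the paper's architecture: TinyPolicy, the small interaction_loss network, the dimensions, and the learning rate are all illustrative stand-ins. The point is only the mechanism of taking gradient steps on a learned self-supervised loss while the agent navigates.

    import torch
    import torch.nn as nn

    class TinyPolicy(nn.Module):
        """Stand-in for the navigation policy (a CNN+RNN in practice)."""
        def __init__(self, obs_dim=32, n_actions=4):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_actions))
        def forward(self, obs):
            return self.net(obs)

    # Learned self-supervised loss: scores (observation, action-logits)
    # pairs. Its parameters would be meta-trained; here they are random.
    interaction_loss = nn.Sequential(nn.Linear(32 + 4, 32), nn.ReLU(),
                                     nn.Linear(32, 1))

    policy, inner_lr = TinyPolicy(), 1e-2
    obs = torch.randn(1, 32)          # stand-in for a visual observation

    for step in range(5):             # adaptation within a single episode
        logits = policy(obs)
        loss = interaction_loss(torch.cat([obs, logits], dim=-1)).mean()
        grads = torch.autograd.grad(loss, list(policy.parameters()))
        with torch.no_grad():         # one SGD step on the learned loss
            for p, g in zip(policy.parameters(), grads):
                p -= inner_lr * g
        action = policy(obs).argmax(dim=-1)  # act with the adapted policy

At meta-training time, an outer loop would update the interaction loss itself so that these inner steps actually improve navigation; that second-order part is omitted here for brevity.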

PDF | Project Page | Video | Presentation


SeGAN: Segmenting and Generating the Invisible

K Ehsani, R Mottaghi, A Farhadi (CVPR18, Spotlight)

Humans have a strong ability to make inferences about the appearance of the invisible and occluded parts of a scene. For example, when we look at the scene on the left, we can predict what is behind the coffee table, and can even complete the sofa based on its visible parts, the coffee table, and what we know in general about sofas and coffee tables and how they occlude each other.

SeGAN can learn to
  1. Generate the appearance of the occluded parts of objects,
  2. Segment the invisible parts of objects (the two-stage pipeline is sketched after this list),
  3. Reliably segment natural images, despite being trained on photo-realistic synthetic images,
  4. Infer depth layering by reasoning about occluder-occludee relations.
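
The two-stage idea can be sketched in a few lines of hypothetical PyTorch. The tiny convolutional networks below are illustrative stand-ins, not the paper's architecture: a segmentor predicts the full amodal mask from the image and the visible mask, and a generator then paints the appearance of the hidden region.

    import torch
    import torch.nn as nn

    class Segmentor(nn.Module):
        """Predicts the full (visible + occluded) mask from image + visible mask."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        def forward(self, image, visible_mask):
            return self.net(torch.cat([image, visible_mask], dim=1))

    class Generator(nn.Module):
        """Fills in the appearance of the predicted occluded region."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
        def forward(self, image, full_mask):
            return self.net(torch.cat([image, full_mask], dim=1))

    image = torch.randn(1, 3, 64, 64)        # RGB crop around the object
    visible = torch.zeros(1, 1, 64, 64)      # mask of the visible part
    full_mask = Segmentor()(image, visible)  # amodal segmentation
    completed = Generator()(image, full_mask)  # appearance of hidden parts

In the paper the generator is trained adversarially (the GAN in SeGAN); the discriminator is omitted from this sketch.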

PDF | Project Page | Presentation


Who Let The Dogs Out? Modeling Dog Behavior From Visual Data

K Ehsani, H Bagherinezhad, J Redmon, R Mottaghi, A Farhadi (CVPR18)

Dogs are intelligent. Let's learn from them! We introduce DECADE, a dataset of ego-centric videos from a dog’s perspective as well as her corresponding movements.

In this paper we propose a model that can
  1. Learn to act like a dog (predict the dog's future moves; a minimal sketch follows this list),
  2. Learn to plan like a dog (estimate a sequence of movements that take the state of the dog's world from what is observed at a given time to a desired observed state),
  3. Learn from a dog (explore the potential of using the dog's movements for representation learning).
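
As a rough illustration of the "acting" task, here is a minimal, hypothetical PyTorch sketch. ActLikeADog, the feature sizes, and the per-joint heads are illustrative stand-ins, not the paper's model: a recurrent network reads embeddings of a few consecutive ego-centric frames and predicts the dog's next movements as discrete classes, one classifier per joint.

    import torch
    import torch.nn as nn

    class ActLikeADog(nn.Module):
        def __init__(self, feat_dim=128, n_joints=4, n_classes=8):
            super().__init__()
            self.encoder = nn.Linear(512, feat_dim)  # stand-in for a CNN
            self.lstm = nn.LSTM(feat_dim, 128, batch_first=True)
            # One classifier head per joint, each over movement bins.
            self.heads = nn.ModuleList(
                nn.Linear(128, n_classes) for _ in range(n_joints))
        def forward(self, frame_feats):
            x = torch.relu(self.encoder(frame_feats))
            out, _ = self.lstm(x)
            last = out[:, -1]                        # summary of the clip
            return [head(last) for head in self.heads]

    frames = torch.randn(1, 5, 512)   # 5 frames of precomputed features
    logits_per_joint = ActLikeADog()(frames)

Each head's argmax would give the predicted movement bin for one joint at the next time step; training would use a per-joint cross-entropy loss against the recorded movements.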

PDF | Project Page | TechCrunch Report | Reuters Report | IEEE Spectrum Report | NBC News Report | BBC Report

Contact Me