I am a Ph.D. student working with Prof. Byron Boots and Prof. Joshua Smith at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Before that, I received my B.S. and M.S. degrees in Electrical Engineering from the University of Washington in 2015 and 2018, respectively. I love to work on problems that help robots interact with the real world, from manipulation and perception to HRI. My current project explores the robot as a competitive agent in human-robot interaction scenarios. By endowing the robot with adversarial behaviors, the task becomes more challenging, and thus more fun, engaging, and effective, for the human. In particular, we aim to create a robot that physically reacts to a human's body movement in entertainment, athletic training, and physical therapy applications using multi-agent reinforcement learning. I have also worked on projects that develop methods for precise sequential robotic manipulation using deep learning and pre-touch sensing (proximity sensors mounted on the manipulator). In that work, I enabled a PR2 (a general-purpose dual-arm humanoid robot) to solve any 3x3 Rubik's Cube in under 7 minutes, and generalized this sensing method to everyday object manipulation using deep learning.

CV (June 2019)

Research Projects

Competitive Human-Robot Interaction

This is an ongoing research project with two main objectives: (1) developing robotic systems with real-time strategic decision-making capabilities and body agility comparable to those of humans; (2) studying how competitive-HRI can make a positive impact on our daily lives.

Yang, Boling, et al. Stackelberg MADDPG: Learning Emergent Behaviors via Information Asymmetry in Competitive Games. Preprint. PDF

Yang, Boling, et al. Motivating Physical Activity via Competitive Human-Robot Interaction. Conference on Robot Learning (CoRL), 2021. PDF Selected for Oral Presentation (Top 6.5%)

Yang, Boling, et al. Competitive Physical Human-Robot Game Play. AAAI Conference on Artificial Intelligence, Workshop on Reinforcement Learning in Games, 2021. PDF

Benchmarking Robot Manipulation with the Rubik’s Cube

Benchmarks for robot manipulation are crucial to measuring progress in the field, yet there are few benchmarks that demonstrate critical manipulation skills, possess standardized metrics, and can be attempted by a wide array of robot platforms. To address a lack of such benchmarks, we propose Rubik's cube manipulation as a benchmark to measure simultaneous performance of precise manipulation and sequential manipulation. The sub-structure of the Rubik's cube demands precise positioning of the robot's end effectors, while its highly reconfigurable nature enables tasks that require the robot to manage pose uncertainty throughout long sequences of actions. We present a protocol for quantitatively measuring both the accuracy and speed of Rubik's cube manipulation. This protocol can be attempted by any general-purpose manipulator, and only requires a standard 3x3 Rubik's cube and a flat surface upon which the Rubik's cube initially rests (e.g. a table). We demonstrate this protocol for two distinct baseline approaches on a PR2 robot. The first baseline provides a fundamental approach for pose-based Rubik's cube manipulation. The second baseline demonstrates the benchmark's ability to quantify improved performance by the system, particularly that resulting from the integration of pre-touch sensing. To demonstrate the benchmark's applicability to other robot platforms and algorithmic approaches, we present the functional blocks required to enable the HERB robot to manipulate the Rubik's cube via push-grasping.

Yang, B., Lancaster, P., Srinivasa, S.S., & Smith, J.R. (2020, September). Benchmarking Robot Manipulation with the Rubik’s Cube. IEEE Robotics and Automation Letters (RA-L), Special Issue: Benchmarking Protocols for Robotic Manipulation, 2020. PDF Benchmark Protocol

Contact-less Manipulation of Millimeter-scale Objects via Ultrasonic Levitation

Ultrasonic levitation devices have been shown to levitate a large range of objects, from polystyrene balls to living organisms. The material-agnostic nature of acoustic levitation devices and their ability to dexterously manipulate millimeter-scale objects make them appealing as a mode of manipulation for general-purpose robots to precisely manipulate small and fragile objects. Additional advantages of this technology include compensating for robot manipulator positioning uncertainty and controlling object grasping force through phase-controlled acoustic force fields. In this work, we present an ultrasonic, contact-less manipulation device capable of performing the very first phase-controlled picking action on acoustically reflective surfaces. The picking action of the manipulator is aided by a simulation that models the force dynamics inside the levitator and facilitates a better acoustic emitter geometry design. With the manipulator placed around the target object, the manipulator can grasp objects smaller in size than the robot's positioning uncertainty, trap the object to resist air currents during robot movement, and dexterously hold a small and fragile object, like a flower bud. Since the ultrasound-based gripper is contact-less, a camera positioned to look into the cylinder can inspect the object with no occlusion, facilitating accurate visual feature extraction.
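To give a rough intuition for phase-controlled acoustic fields, the sketch below computes per-emitter phase delays that bring a phased array to focus at a target point. This is a generic textbook illustration, not the device described above: the array geometry, frequency, and function names are all hypothetical.

```python
# Hypothetical sketch: phase delays that focus an ultrasonic phased array
# at a target point (illustrative only, not the actual device design).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
FREQ = 40e3             # Hz; 40 kHz transducers are common in acoustic levitation

def focus_phases(emitters, target):
    """Per-emitter phase advance (radians) so all waves arrive at `target` in phase.

    emitters: (N, 3) array of emitter positions in meters
    target:   (3,) focal point in meters
    """
    k = 2.0 * np.pi * FREQ / SPEED_OF_SOUND        # wavenumber
    d = np.linalg.norm(emitters - target, axis=1)  # path length from each emitter
    # A wave launched with phase k*d arrives at the focus with phase 0 (mod 2*pi),
    # so all contributions interfere constructively at the target.
    return (k * d) % (2.0 * np.pi)
```

Sweeping the target position over time and recomputing the phases moves the focus, which is the basic mechanism behind phase-controlled manipulation; an actual levitation trap additionally shapes the field (e.g. adding a pi offset to half the emitters for a twin trap), which this sketch omits.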

Nakahara, J., Yang, B., Smith, J.R. Contact-less Manipulation of Millimeter-scale Objects via Ultrasonic Levitation. In 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), pp. 264-271. IEEE, 2020. PDF

Improved Object Pose Estimation via Deep Pre-touch Sensing

For certain manipulation tasks, object pose estimation from head-mounted cameras may not be sufficiently accurate. This is at least in part due to our inability to perfectly calibrate the coordinate frames of today’s high degree of freedom robot arms that link the head to the end-effectors. We present a novel framework combining pre-touch sensing and deep learning to more accurately estimate pose in an efficient manner. The use of pre-touch sensing allows our method to localize the object directly with respect to the robot’s end effector, thereby avoiding error caused by miscalibration of the arms. Instead of requiring the robot to scan the entire object with its pre-touch sensor, we use a deep neural network to detect object regions that contain distinctive geometric features. By focusing pre-touch sensing on these regions, the robot can more efficiently gather the information necessary to adjust its original pose estimate. Our region detection network was trained using a new dataset that contains objects of widely varying geometries and was labeled in a scalable fashion that is free from human bias. This dataset is applicable to any task that involves a pre-touch sensor gathering geometric information, and has been made publicly available. We evaluate our framework by having the robot re-estimate the pose of a number of objects of varying geometries. Compared to two simpler region proposal methods, we find that our deep neural network performs significantly better. In addition, we find that after a sequence of scans, objects can typically be localized to within 0.5 cm of their true position. We also observe that the original pose estimate can often be significantly improved after collecting a single quick scan.

Lancaster, P., Yang, B., & Smith, J.R. (2017, September). Improved Object Pose Estimation via Deep Pre-touch Sensing. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. PDF Video

Pre-touch Sensing for Sequential Manipulation

The primary focus of this work is to examine how robots can achieve more robust sequential manipulation through the use of pre-touch sensors. The utility of close-range proximity sensing is evaluated through a robotic system that uses a new optical time-of-flight pre-touch sensor to complete a highly precise and sequential task: solving the Rubik’s Cube. The techniques used in this task are then extended to a more general framework in which ICP is used to match pre-touch data to a reference model, demonstrating that even simple pre-touch scans can be used to recover the pose of common objects that require sequential manipulation.
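The reference-model matching step can be illustrated with a minimal point-to-point ICP. The sketch below is a generic NumPy implementation of the textbook algorithm (all names hypothetical), not the system's actual code: it alternates nearest-neighbor correspondence with a closed-form rigid fit.

```python
# Minimal point-to-point ICP sketch: align a sparse pre-touch scan
# to a denser reference model (illustrative only).
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(scan, model, iters=30, tol=1e-6):
    """Iteratively match each scan point to its nearest model point and re-fit the pose."""
    src = scan.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest-neighbor correspondences (fine for small scans).
        d2 = ((src[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matches = model[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:     # stop once the residual plateaus
            break
        prev_err = err
    # Composite transform from the original scan frame to the model frame.
    return best_rigid_transform(scan, src)
```

Brute-force matching is quadratic in the number of points; a real system would use a k-d tree, but for the handful of points a quick pre-touch scan yields, the simple form suffices.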

Yang, B., Lancaster, P., & Smith, J.R. (2017, May). Pre-touch Sensing for Sequential Manipulation. In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE. PDF Video Sensor


Yang, B., Lancaster, P., & Smith, J.R. (2016, June). Prospects for Combining Task and Motion Planning for Bi-Manual Solution of the Rubik's Cube. In Robotics: Science and Systems 2016. Poster Abstract Video

Yang, B., Lancaster, P., & Smith, J.R. (2018, June). Physical Human-Robot Adversarial Gameplay. In Robotics: Science and Systems 2018. Abstract


Singer, Music Composer, Guitarist, Competitive RC Model Racer, Badminton Player