1. A Concise Introduction to Models and Methods for Automated Planning
  2. Active Perception: Using Goal Context to Guide Sensing and Other Actions
  3. A Disaster Response System based on Human-Agent Collectives
  4. A Game-Theoretic Planning Framework for Intentional Threat Assessment
  5. A Linearly Relaxed Approximate Linear Program for Markov Decision Processes
  6. An MDP-based Approximation Method for Goal Constrained Multi-MAV Planning under Action Uncertainty
  7. ASAP-UCT: Abstraction of State-Action Pairs in UCT
  8. A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning
  9. Automated Verification and Strategy Synthesis for Probabilistic Systems
  10. Bottom-Up Learning of Hierarchical Models in a Class of Deterministic POMDP Environments
  11. Bridging Theory and Practice in Cache Replacement
  12. Cold-Start Collaborative Filtering
  13. Computing Machines as Agents: An Interface Layer for Joint Advanced Networking
  14. Confidence-based progress-driven self-generated goals for skill acquisition in developmental robots
  15. Constraining an MDP’s Search Space by Usage of Planning Trajectories
  16. Decision Making for Automated Vehicles in Merging Situations using Partially Observable Markov Decision Processes
  17. Deliberation for autonomous robots: A survey
  18. Design and Analysis of Spatially-Partitioned Shared Caches
  19. Designing Incentive for Cooperative Problem Solving in Crowdsourcing
  20. Dissertation Abstract: Exploiting Symmetries in Sequential Decision Making under Uncertainty
  21. Dynamic Decision Making Model for Wildfire Containment
  22. Equi-Reward Utility Maximizing Design in Stochastic Environments
  23. Evaluation of Linear relaxations in Ad Network optimization for online marketing
  24. Exact and approximate probabilistic symbolic execution for nondeterministic programs
  25. Goal Recognition Design with Stochastic Agent Action Outcomes
  26. Heuristically accelerated reinforcement learning modularization for multi-agent multi-objective problems
  27. Heuristic Search in Dual Space for Constrained Stochastic Shortest Path Problems
  28. Hierarchical decomposition of multi-agent Markov decision processes with application to health aware planning
  29. Human-Agent Collaboration for Disaster Response
  30. Human Choice and Good Choice
  31. Hybrid Planning for Decision Making in Self-Adaptive Systems
  32. Hybrid Planning For Self-Adaptation
  33. Integrated task and motion planning in belief space
  34. Investigation of Learning Strategies for the SPOT Broker in Power TAC
  35. Learning Continuous State/Action Models for Humanoid Robots
  36. Learning Efficient Representations for Reinforcement Learning
  37. Learning Uncertainty in Ocean Current Predictions for Safe and Reliable Navigation of Underwater Vehicles
  38. Making Good Decisions Quickly
  39. Markov Reward Models and Markov Decision Processes in Discrete and Continuous Time: Performance Evaluation and Optimization
  40. Maximizing Cache Performance Under Uncertainty
  41. Methods to analyze responding to, counteracting, and utilizing sociocultural behaviors
  42. Modeling and Solutions for Ad Networks [Modelagem e soluções para redes de anúncios]
  43. Multiagent Learning: dynamic games & applications
  44. Multi-Objective Planning with Multiple High Level Task Specifications
  45. Network Coding for Hop-by-Hop Communication Enhancement in Multi-hop Networks
  46. A Survey of Applications of Formal Methods in Robotics [Обзор применения формальных методов в робототехнике]
  47. Occupation Measure Heuristics for Probabilistic Planning
  48. OGA-UCT: On-the-Go Abstractions in UCT
  49. On Optimal Policies for Network Coded Cooperation: Theory and Implementation
  50. On the Probabilistic Symbolic Analysis of Programs
  51. Optimized Control for Water Utilities
  52. Plan Aggregation for Strong Cyclic Planning in Nondeterministic Domains
  53. Probabilistic Planning as Search in a State-Transition Space [Planejamento probabilístico como busca num espaço de transição de estados]
  54. Planning for stochastic games with co-safe objectives
  55. POMDP-based Control of Workflows for Crowdsourcing
  56. Policy Explanation and Model Refinement in Decision-Theoretic Planning
  57. Probabilistic Modeling of Dynamic Systems
  58. Providing Serializability for Pregel-like Graph Processing Systems
  59. Real-time Elective Admissions Planning for Health Care Providers
  60. Redesigning Stochastic Environments for Maximized Utility
  61. Reinforcement Learning in Differential Games: A Learning Invader for the Guarding a Territory Game
  62. Risk-aware path planning for autonomous underwater vehicles
  63. Robotics and Artificial Intelligence: a Perspective on Deliberation Functions
  64. Sample-based abstraction for hybrid relational MDPs
  65. Scalable Approximate Policies for Markov Decision Process Models of Hospital Elective Admissions
  66. Scalable Methods and Expressive Models for Planning Under Uncertainty
  67. Simple Regret Optimization in Online Planning for Markov Decision Processes
  68. Survey on Formal Methods in Robotics
  69. The Stochastic Shortest Path Problem: A polyhedral combinatorics perspective
  70. To UCT, or not to UCT? (Position Paper)
  71. Verification of Markov Decision Processes using Learning Algorithms