Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Portfolio

Publications

Resource Constrained Deep Reinforcement Learning

Published in Proceedings of the International Conference on Automated Planning and Scheduling, 2019

TL;DR: Deep RL to optimize constrained resource allocation at city scale. Good results on realistic datasets.

Recommended citation: Bhatia, A., Varakantham, P., & Kumar, A. (2019). Resource Constrained Deep Reinforcement Learning. In Proceedings of the International Conference on Automated Planning and Scheduling, 29(1), 610-620. https://ojs.aaai.org/index.php/ICAPS/article/view/3528

Tuning the Hyperparameters of Anytime Planning: A Deep Reinforcement Learning Approach

Published in ICAPS 2021 Workshop on Heuristics and Search for Domain-independent Planning, 2021

TL;DR: Deep RL to control hyperparameters of anytime algorithms at runtime to optimize quality of the final solution. Good results on the Anytime A* search algorithm.

Recommended citation: Bhatia, A., Svegliato, J., & Zilberstein, S. (2021). Tuning the Hyperparameters of Anytime Planning: A Deep Reinforcement Learning Approach. In ICAPS 2021 Workshop on Heuristics and Search for Domain-independent Planning. https://openreview.net/forum?id=c7hpFp_eRCo

On the Benefits of Randomly Adjusting Anytime Weighted A*

Published in Proceedings of the International Symposium on Combinatorial Search, 2021

TL;DR: Randomized Weighted A* tunes the weight in Anytime Weighted A* randomly at runtime and outperforms every static weighted baseline.

Recommended citation: Bhatia, A., Svegliato, J., & Zilberstein, S. (2021). On the Benefits of Randomly Adjusting Anytime Weighted A*. In Proceedings of the International Symposium on Combinatorial Search, 12(1), 116-120. https://ojs.aaai.org/index.php/SOCS/article/view/18558
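The randomized-weight idea can be illustrated with a toy sketch: an anytime weighted A* on a small grid where the heuristic weight is re-sampled at random for each node expansion, instead of being fixed in advance. This is not the paper's implementation; the function name, weight set, and grid domain are all illustrative.

```python
import heapq
import random


def randomized_weighted_astar(grid, start, goal,
                              weights=(1.0, 1.5, 2.0, 3.0), seed=0):
    """Anytime weighted A* on a 4-connected grid (0 = free, 1 = blocked)
    that samples the heuristic weight uniformly at random per expansion.
    Returns the cost of the best path found (inf if unreachable)."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan, admissible
    g = {start: 0}
    open_heap = [(rng.choice(weights) * h(start), start)]
    incumbent = float("inf")  # cost of the best solution found so far
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if g[node] + h(node) >= incumbent:
            continue  # cannot improve the incumbent (h never overestimates)
        if node == goal:
            incumbent = g[node]  # new incumbent; keep searching for better
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[node] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    w = rng.choice(weights)  # fresh random weight each push
                    heapq.heappush(open_heap, (ng + w * h((nr, nc)), (nr, nc)))
    return incumbent
```

Because the search runs to exhaustion and prunes only nodes that provably cannot beat the incumbent, the final incumbent is optimal; the random weights only change how quickly good incumbents appear along the way.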

Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL

Published in arXiv preprint arXiv:2206.02380, 2022

TL;DR: Meta-level deep RL to adapt the rollout length in model-based RL non-myopically, based on feedback from the learning process such as model accuracy, learning progress, and sample scarcity.

Recommended citation: Bhatia, A., Thomas, P. S., & Zilberstein, S. (2022). Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL. arXiv preprint arXiv:2206.02380. https://arxiv.org/abs/2206.02380

Tuning the Hyperparameters of Anytime Planning: A Metareasoning Approach with Deep Reinforcement Learning

Published in Proceedings of the International Conference on Automated Planning and Scheduling, 2022

TL;DR: Deep RL to determine the optimal stopping point and hyperparameters of anytime algorithms at runtime to optimize utility of the final solution. Good results on the Anytime A* search algorithm and the RRT* motion planning algorithm.

Recommended citation: Bhatia, A., Svegliato, J., Nashed, S. B., & Zilberstein, S. (2022). Tuning the Hyperparameters of Anytime Planning: A Metareasoning Approach with Deep Reinforcement Learning. In Proceedings of the International Conference on Automated Planning and Scheduling, 32(1), 556-564. https://ojs.aaai.org/index.php/ICAPS/article/view/19842

Selecting the Partial State Abstractions of MDPs: A Metareasoning Approach with Deep Reinforcement Learning

Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022

Recommended citation: Nashed, S. B., Svegliato, J., Bhatia, A., Russell, S., & Zilberstein, S. (2022). Selecting the Partial State Abstractions of MDPs: A Metareasoning Approach with Deep Reinforcement Learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.