ABSTRACT
Digital hardware is verified by comparing its behavior against a reference model on a range of randomly generated input signals. Random input generation aims to achieve sufficient coverage of the different parts of the design, but such coverage is often difficult to achieve, leading to large verification efforts and delays. An alternative is to use Reinforcement Learning (RL) to generate the inputs, learning to prioritize those inputs that explore the design under test more efficiently. In this work, we present VeRLPy [3], an open-source Python library that enables RL-driven verification with limited additional engineering overhead. This contributes to two broad movements within the EDA community: (a) the shift to open-source tool chains, and (b) the lowering of development barriers through Python support. We also demonstrate the use of VeRLPy on a few designs and establish its value over randomly generated input signals.
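Concretely, the RL loop described above can be pictured as a Gym-style environment wrapped around the design under test (DUT): each action selects a constrained-random stimulus configuration, the observation is the current coverage state, and the reward is the number of newly hit coverage bins. The sketch below is a minimal, hypothetical illustration using the classic OpenAI Gym API (as used circa 2021); the class name DUTCoverageEnv, the bin counts, and the _run_dut stand-in are invented for illustration and are not VeRLPy's actual interface. A real testbench would drive the DUT through a simulator, e.g. via cocotb, instead of the toy coverage model used here.

import gym
import numpy as np
from gym import spaces

class DUTCoverageEnv(gym.Env):
    """Hypothetical coverage-driven environment around a DUT.

    Actions pick one of several stimulus configurations; the reward
    is the number of coverage bins newly hit by that configuration.
    """

    NUM_BINS = 32     # coverage bins tracked for the toy DUT
    NUM_CONFIGS = 4   # selectable stimulus configurations

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(self.NUM_CONFIGS)
        self.observation_space = spaces.MultiBinary(self.NUM_BINS)
        self.hit = np.zeros(self.NUM_BINS, dtype=np.int8)
        self.steps = 0

    def _run_dut(self, config):
        # Stand-in for driving the DUT through a simulator and
        # reading back coverage; here each configuration is biased
        # toward a different region of the coverage space.
        lo = config * self.NUM_BINS // self.NUM_CONFIGS
        hi = lo + self.NUM_BINS // self.NUM_CONFIGS
        return np.random.randint(lo, hi, size=3)

    def reset(self):
        self.hit[:] = 0
        self.steps = 0
        return self.hit.copy()

    def step(self, action):
        before = self.hit.sum()
        self.hit[self._run_dut(action)] = 1
        reward = float(self.hit.sum() - before)  # newly covered bins
        self.steps += 1
        done = self.steps >= 100 or bool(self.hit.all())
        return self.hit.copy(), reward, done, {}

Under these assumptions, an off-the-shelf learner such as Stable Baselines3's PPO could be trained on this environment with model = PPO("MlpPolicy", DUTCoverageEnv()) followed by model.learn(total_timesteps=10_000); the learned policy would then favor whichever stimulus configurations uncover new coverage bins fastest, which is the intuition behind RL-driven verification outperforming purely random stimulus.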
- 2016. SHAKTI Processor Program. https://gitlab.com/shaktiproject. Accessed: 2021-09-20.
- 2021. SHAKTIMAAN. https://github.com/iitm-sysdl/SHAKTIMAAN. Accessed: 2021-09-20.
- 2021. VeRLPy. https://github.com/aebeljs/VeRLPy. Accessed: 2021-09-20.
- Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. 2018. Hindsight Experience Replay. arXiv:1707.01495 [cs.LG].
- Justin A. Boyan and Michael L. Littman. 1993. Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach. In Proceedings of the 6th International Conference on Neural Information Processing Systems (Denver, Colorado) (NIPS'93). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 671–678.
- Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. OpenAI Gym. arXiv:1606.01540.
- Yi-Ren Chen, Amir Rezapour, Wen-Guey Tzeng, and Shi-Chun Tsai. 2020. RL-Routing: An SDN Routing Algorithm Based on Deep Reinforcement Learning. IEEE Transactions on Network Science and Engineering 7, 4 (2020), 3185–3199. https://doi.org/10.1109/TNSE.2020.3017751
- Gabriel Dulac-Arnold, Richard Evans, Hado van Hasselt, Peter Sunehag, Timothy Lillicrap, Jonathan Hunt, Timothy Mann, Theophane Weber, Thomas Degris, and Ben Coppin. 2016. Deep Reinforcement Learning in Large Discrete Action Spaces. arXiv:1512.07679 [cs.AI].
- Benjamin Ellenberger. 2018. PyBullet Gymperium. https://github.com/benelot/pybullet-gym. Accessed: 2021-07-20.
- Anna Goldie and Azalia Mirhoseini. 2020. Placement Optimization with Deep Reinforcement Learning. In Proceedings of the 2020 International Symposium on Physical Design. 3–7.
- Jorge Gómez. 2019. NASGym. https://github.com/gomerudo/nas-env. Accessed: 2021-07-20.
- Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv:1801.01290 [cs.LG].
- Mohammad Amin Haghpanah. 2019. gym-anytrading. https://github.com/AminHP/gym-anytrading. Accessed: 2021-07-20.
- Chris Higgs and Stuart Hodgeson. 2013. cocotb. https://github.com/cocotb/cocotb. Accessed: 2021-07-20.
- Eddie Huang. 2019. GymGo: The Board Game Go. https://github.com/aigagror/GymGo. Accessed: 2021-07-20.
- Guyue Huang, Jingbo Hu, Yifan He, Jialong Liu, Mingyuan Ma, Zhaoyang Shen, Juejian Wu, Yuanfan Xu, Hengrui Zhang, Kai Zhong, Xuefei Ning, Yuzhe Ma, Haoyu Yang, Bei Yu, Huazhong Yang, and Yu Wang. 2021. Machine Learning for Electronic Design Automation: A Survey. arXiv:2102.03357 [cs.AI].
- William Hughes, Sandeep Srinivasan, Rohit Suvarna, and Maithilee Kulkarni. 2019. Optimizing Design Verification using Machine Learning: Doing better than Random. arXiv:1909.13168 [cs.LG].
- Myoung Hoon Lee and Jun Moon. 2021. Deep Reinforcement Learning-based UAV Navigation and Control: A Soft Actor-Critic with Hindsight Experience Replay Approach. arXiv:2106.01016 [eess.SY].
- Ning Liu, Zhe Li, Zhiyuan Xu, Jielong Xu, Sheng Lin, Qinru Qiu, Jian Tang, and Yanzhi Wang. 2017. A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning. arXiv:1703.04221 [cs.DC].
- Hongzi Mao, Mohammad Alizadeh, Ishai Menache, and Srikanth Kandula. 2016. Resource Management with Deep Reinforcement Learning. In Proceedings of the 15th ACM Workshop on Hot Topics in Networks (Atlanta, GA, USA) (HotNets '16). Association for Computing Machinery, New York, NY, USA, 50–56. https://doi.org/10.1145/3005745.3005750
- Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Sungmin Bae, Azade Nazi, Jiwoo Pak, Andy Tong, Kavya Srinivasa, William Hang, Emre Tuncer, Anand Babu, Quoc V. Le, James Laudon, Richard Ho, Roger Carpenter, and Jeff Dean. 2020. Chip Placement with Deep Reinforcement Learning. arXiv:2004.10746 [cs.LG].
- Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529–533.
- Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. 2017. Curiosity-driven Exploration by Self-supervised Prediction. arXiv:1705.05363 [cs.LG].
- Antonin Raffin, Ashley Hill, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, and Noah Dormann. 2019. Stable Baselines3. https://github.com/DLR-RM/stable-baselines3. Accessed: 2021-07-20.
- John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. 2017. Trust Region Policy Optimization. arXiv:1502.05477 [cs.LG].
- John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal Policy Optimization Algorithms. arXiv:1707.06347 [cs.LG].
- Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning 8, 3–4 (1992), 279–292.
- Yufei Ye, Xiaoqin Ren, Jin Wang, Lingxiao Xu, Wenxia Guo, Wenqiang Huang, and Wenhong Tian. 2018. A New Approach for Resource Scheduling with Deep Reinforcement Learning. arXiv:1806.08122 [cs.AI].