Gmargo11/hDQN: Implementation Of Hierarchical Deep Q ... - GitHub


Folders and files

  • __pycache__/
  • agents/
  • envs/
  • utils/
  • README.md
  • paper.pdf
  • presentation.pdf
  • run_tests.py
  • run_tests_continuous.py
  • run_tests_mdp.py

hDQN

Replication of the first experiment of Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation (Kulkarni et al., 2016).

The report (paper.pdf) and presentation (presentation.pdf) are included in this repository.

This work was done as a class project for MIT 6.882: Embodied Intelligence. Thanks to Professor Tomas Lozano-Perez for valuable feedback on the approach. This implementation was also inspired by an earlier replication attempt, https://github.com/EthanMacdonald/h-DQN, which did not successfully reproduce the paper's results.
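The repository's own agent and environment code is not shown on this page. As a rough illustration of what is being replicated, the sketch below implements the paper's first experiment, a six-state stochastic decision process, with tabular Q-learning standing in for the two DQNs. The names (`StochasticMDP`, `run_hdqn`) and hyperparameters are illustrative assumptions, not taken from this repository.

```python
import random

# Stochastic decision process from Kulkarni et al. (2016), experiment 1:
# states 0..5 (s1..s6); episodes start at s2 (index 1). Action 1 ("right")
# moves right with probability 0.5 and left otherwise; action 0 ("left")
# always moves left. Reaching s1 ends the episode with reward 1.0 if s6
# was visited first, else 0.01.
class StochasticMDP:
    def __init__(self):
        self.n_states = 6

    def reset(self):
        self.state = 1          # s2
        self.visited_s6 = False
        return self.state

    def step(self, action):
        if action == 1 and random.random() < 0.5:
            self.state = min(self.state + 1, self.n_states - 1)
        else:
            self.state = max(self.state - 1, 0)
        if self.state == self.n_states - 1:
            self.visited_s6 = True
        done = self.state == 0
        reward = (1.0 if self.visited_s6 else 0.01) if done else 0.0
        return self.state, reward, done

# Sketch of the h-DQN control loop with tabular Q-learning in place of
# neural networks: a meta-controller picks a goal state, and a controller
# earns intrinsic reward 1 for reaching that goal.
def run_hdqn(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    env = StochasticMDP()
    n = env.n_states
    meta_q = [[0.0] * n for _ in range(n)]                       # Q2(state, goal)
    ctrl_q = [[[0.0] * 2 for _ in range(n)] for _ in range(n)]   # Q1(goal, state, action)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Meta-controller: epsilon-greedy goal selection.
            if random.random() < eps:
                goal = random.randrange(n)
            else:
                goal = max(range(n), key=lambda g: meta_q[s][g])
            s0, ext_reward = s, 0.0
            # Controller acts until the goal is reached or the episode ends.
            while not done and s != goal:
                if random.random() < eps:
                    a = random.randrange(2)
                else:
                    a = 1 if ctrl_q[goal][s][1] > ctrl_q[goal][s][0] else 0
                s2, r, done = env.step(a)
                intrinsic = 1.0 if s2 == goal else 0.0
                bootstrap = 0.0 if (done or s2 == goal) else gamma * max(ctrl_q[goal][s2])
                ctrl_q[goal][s][a] += alpha * (intrinsic + bootstrap - ctrl_q[goal][s][a])
                ext_reward += r
                s = s2
            # Meta-controller learns from accumulated extrinsic reward.
            meta_target = ext_reward + (0.0 if done else gamma * max(meta_q[s]))
            meta_q[s0][goal] += alpha * (meta_target - meta_q[s0][goal])
    return meta_q
```

In the paper, the meta-controller and controller are separate DQNs with replay memories; the tabular version above keeps only the two-level structure and the intrinsic-reward scheme.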

About

Implementation of Hierarchical Deep Q-Learning (Kulkarni et al., 2016)



Activity

  • 36 stars
  • 0 watching
  • 7 forks

Releases

No releases published


Languages

  • Python 100.0%
