如何解决AssertionError:defaultdict<函数mc_control_importance_sampling<locals><lambda> at 0x7f31699ffe18>
我一直在使用稳定的基准和具有3个动作的离散环境来制作DQN。
以供参考
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell,self).__init__()
self.linear = torch.nn.Linear(4,4)
def forward(self,x,h):
new_h = torch.tanh(self.linear(x) + h)
return new_h,new_h
my_cell = MyCell()
x,h = torch.rand(3,4),torch.rand(3,4)
traced_cell = torch.jit.script(my_cell,(x,h))
print(traced_cell)
traced_cell(x,h)
但是我的蒙特卡洛方法的辅助函数存在一些问题
env = gym.make('fishing-v0')
model = DQN(MlpPolicy,env,verbose=2)
trained_model = model.learn(total_timesteps=10000)
当我调用函数时,
def mc_control_importance_sampling(env,num_episodes,discount = .99):
"""
Monte Carlo Control Off-Policy Control using Weights for Sampling.
Finds an optimal greedy policy.
"""
# creates Q dictionary that maps obs to action values
Q = defaultdict(lambda: np.zeros(env.action_space))
#dictionary for weights
C = defaultdict(lambda: np.zeros(env.action_space))
# learn greedy policy
target_policy = env.step(Q)
for i_episode in range(1,num_episodes + 1):
if i_episode % 1 == 0:
print("\rEpisode {}/{}.".format(i_episode,num_episodes),end="")
# Generate an episode to be tuple (state,action,reward) tuples
episode = []
obs = env.reset()
for t in range(100):
# Sample an action from our policy
action,_states = trained_model.predict(obs)
next_state,reward,done,_ = env.step(action)
episode.append((state,reward))
if done:
break
obs = next_obs
# Sum of discounted returns
G = 0.0
# weights for return
W = 1.0
for t in range(len(episode))[::-1]:
obs,reward = episode[t]
G = discount * G + reward
# Add weights
C[obs][action] += W
# Update policy
Q[obs][action] += (W / C[obs][action]) * (G - Q[obs][action]
if action != np.argmax(target_policy(obs)):
break
W = W * 1./behavior_policy(obs)[action]
return Q,target_policy
我得到了错误
Q,policy = mc_control_importance_sampling(env,num_episodes=500000)
我不确定如何解决此问题,
谢谢
版权声明:本文内容由互联网用户自发贡献,该文观点与技术仅代表作者本人。本站仅提供信息存储空间服务,不拥有所有权,不承担相关法律责任。如发现本站有涉嫌侵权/违法违规的内容, 请发送邮件至 dio@foxmail.com 举报,一经查实,本站将立刻删除。