How can I reduce the time per episode in my DQN?
I have modified the CartPole environment from OpenAI so that it starts in the inverted position and has to learn to swing up. I run it on Google Colab because it is supposed to be faster than my laptop. Or so I thought: it is too slow, one episode takes about 40 seconds, roughly the same as on my laptop. I even tried optimizing it for Google's TPU, but nothing changed. As far as I can tell, the main time consumers are .fit() and .predict(). Here is where I use .predict():
def get_qs(self, state):
    # Query the main network for the Q values of a single state
    return self.model.predict(np.array(state).reshape(-1, *state.shape),
                              workers=8, use_multiprocessing=True)[0]
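For context, this is roughly how the per-call overhead of predict() shows up; a minimal, self-contained timing sketch (the tiny two-layer model is only a stand-in, not my actual network):

import time
import numpy as np
import tensorflow as tf

# Stand-in network with a CartPole-sized input (4 observations, 2 actions)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(24, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

state = np.zeros((1, 4), dtype=np.float32)

# Average the per-call cost of predict() on a single state
t0 = time.perf_counter()
for _ in range(100):
    model.predict(state, verbose=0)
print(f"predict(): {(time.perf_counter() - t0) / 100 * 1000:.1f} ms per call")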
And here is the .fit():
def train(self, terminal_state, step):
    # For training it is always worth taking a larger batch of data to prevent overfitting.
    # (Note: the @tf.function decorator was removed. Model.fit() and Model.predict()
    # manage their own graph functions and raise an error when called inside a
    # tf.function, and the Python-level random.sample/list code cannot be traced.)
    if len(self.replay_memory) < MIN_REPLAY_MEMORY_SIZE:
        return

    # Get a minibatch of random samples from the replay memory
    minibatch = random.sample(self.replay_memory, MINIBATCH_SIZE)

    # Get current states from the minibatch, then query the NN model for Q values
    current_states = np.array([transition[0] for transition in minibatch])
    current_qs_list = self.model.predict(current_states)

    # Get future states from the minibatch, then query the NN model for Q values.
    # When using a target network, query it; otherwise the main network should be queried.
    new_current_states = np.array([transition[3] for transition in minibatch])
    future_qs_list = self.target_model.predict(new_current_states, use_multiprocessing=True)

    X = []
    y = []

    # Now we need to enumerate our batch
    for index, (current_state, action, reward, new_current_state, done) in enumerate(minibatch):
        # If not a terminal state, get the new Q from future states, otherwise set it to the reward.
        # Almost like with Q-learning, but we use just part of the equation here.
        if not done:
            max_future_q = np.max(future_qs_list[index])
            new_q = reward + DISCOUNT * max_future_q
        else:
            new_q = reward

        # Update the Q value for the given state
        current_qs = current_qs_list[index]
        current_qs[action] = new_q

        # And append to our training data
        # (was X.append(state); `state` is undefined here, the sample is current_state)
        X.append(current_state)
        y.append(current_qs)

    # Fit on all samples as one batch, log only on terminal state
    self.model.fit(np.array(X), np.array(y), batch_size=MINIBATCH_SIZE, verbose=0,
                   shuffle=False, use_multiprocessing=True,
                   callbacks=[self.tensorboard] if terminal_state else None)

    # Update target network counter every episode
    if terminal_state:
        self.target_update_counter += 1

    # If counter reaches set value, update target network with weights of main network
    if self.target_update_counter > UPDATE_TARGET_EVERY:
        self.target_model.set_weights(self.model.get_weights())
        self.target_update_counter = 0
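What I am wondering is whether the real cost is predict()'s per-call setup rather than the network itself: as far as I understand, Model.predict() is meant for large batches and builds its input pipeline on every call. A sketch of what I could try instead, calling the model directly (get_qs_fast is just an illustrative name, not part of my code):

def get_qs_fast(self, state):
    # Call the Keras model directly instead of going through Model.predict();
    # the direct call skips predict()'s per-call pipeline setup, which
    # dominates when inferring a single state.
    # (Sketch, assuming self.model is a tf.keras model.)
    state_tensor = tf.convert_to_tensor(
        np.array(state).reshape(-1, *state.shape), dtype=tf.float32)
    return self.model(state_tensor, training=False).numpy()[0]

Similarly, would swapping the two predict() calls in train() for predict_on_batch() and the fit() call for train_on_batch() (both exist on tf.keras models) remove the per-call overhead, or is there a better way?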
Can someone help me speed this up?