Show simple item record

dc.contributor | Escuela de Ingenierias Industrial, Informática y Aeroespacial | es_ES
dc.contributor.author | Fernández, Fernando
dc.contributor.author | Borrajo, Daniel
dc.contributor.author | Matellán Olivera, Vicente
dc.contributor.other | Arquitectura y Tecnologia de Computadores | es_ES
dc.date | 1999-09-10
dc.date.accessioned | 2012-10-18T12:20:09Z
dc.date.available | 2012-10-18T12:20:09Z
dc.date.issued | 2012-10-18
dc.identifier.citation | European Conference on Planning, September 1999, Durham, United Kingdom | es_ES
dc.identifier.uri | http://hdl.handle.net/10612/1921
dc.description.abstract | Reinforcement learning has proven to be very successful for finding optimal policies in uncertain and/or dynamic domains. One of the problems in using such techniques appears with large state and action spaces. This problem appears very frequently, given that most information in the type of tasks to which these techniques have been applied is continuous. In this paper, we describe a new mechanism for solving the state generalization problem in reinforcement learning algorithms: the VQQL technique. | es_ES
dc.language | eng | es_ES
dc.subject | Computer science | es_ES
dc.subject.other | Knowledge | es_ES
dc.subject.other | Robotics | es_ES
dc.subject.other | VQQL | es_ES
dc.subject.other | Learning | es_ES
dc.titleVQQL: a model to generalize in reinforcement learninges_ES
dc.typeinfo:eu-repo/semantics/conferenceObjectes_ES
dc.type.otherinfo:eu-repo/semantics/lecturees_ES
dc.rights.accessRightsinfo:eu-repo/semantics/openAccesses_ES


