Show simple item record

dc.coverage.spatial: Investigación aplicada
dc.creator: VICTOR EMANUEL DE ATOCHA UC CETINA
dc.date: 2013-04-01
dc.date.accessioned: 2018-10-04T15:08:16Z
dc.date.available: 2018-10-04T15:08:16Z
dc.identifier: http://dx.doi.org/10.1155/2013/492852
dc.identifier.uri: http://redi.uady.mx:8080/handle/123456789/771
dc.description.abstract: We introduce a reinforcement learning architecture designed for problems with an infinite number of states, where each state can be seen as a vector of real numbers, and with a finite number of actions, where each action requires a vector of real numbers as parameters. The main objective of this architecture is to distribute the work required to learn the final policy between two actors: one actor decides which action must be performed, while a second actor determines the right parameters for the selected action. We tested our architecture, and one algorithm based on it, by solving the robot dribbling problem, a challenging robot control problem taken from the RoboCup competitions. Our experimental work with three different function approximators provides enough evidence to prove that the proposed architecture can be used to implement fast, robust, and reliable reinforcement learning algorithms.
dc.language: eng
dc.publisher: Advances in Artificial Intelligence
dc.relation: citation:0
dc.rights: info:eu-repo/semantics/openAccess
dc.rights: http://creativecommons.org/licenses/by-nc-nd/4.0
dc.source: urn:issn:1687-7489
dc.subject: info:eu-repo/classification/cti/7
dc.subject: INGENIERÍA Y TECNOLOGÍA
dc.title: A novel reinforcement learning architecture for continuous state and action spaces
dc.type: info:eu-repo/semantics/article
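The abstract describes a two-actor decomposition for parameterized-action reinforcement learning: one actor selects a discrete action, and a second actor produces the continuous parameter vector for the selected action. The sketch below illustrates that decomposition only; it is not the paper's algorithm. The class name, the linear function approximators, the epsilon-greedy selection, and both update rules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoActorAgent:
    """Illustrative two-actor agent for continuous states and
    parameterized actions (hypothetical names and update rules)."""

    def __init__(self, state_dim, n_actions, param_dim, alpha=0.01, gamma=0.95):
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma
        # Actor 1: linear action-value approximator, one weight row per action.
        self.Q = np.zeros((n_actions, state_dim))
        # Actor 2: linear map from the state to each action's parameter vector.
        self.P = np.zeros((n_actions, param_dim, state_dim))

    def act(self, state, epsilon=0.1):
        # Actor 1 picks the discrete action (epsilon-greedy here).
        if rng.random() < epsilon:
            a = int(rng.integers(self.n_actions))
        else:
            a = int(np.argmax(self.Q @ state))
        # Actor 2 supplies the continuous parameters for that action.
        params = self.P[a] @ state
        return a, params

    def update(self, s, a, r, s_next, target_params):
        # TD(0) step for the action-value actor.
        td = r + self.gamma * np.max(self.Q @ s_next) - self.Q[a] @ s
        self.Q[a] += self.alpha * td * s
        # Regression step nudging the parameter actor toward parameters
        # that worked well (an assumed, simplified training signal).
        err = target_params - self.P[a] @ s
        self.P[a] += self.alpha * np.outer(err, s)

agent = TwoActorAgent(state_dim=4, n_actions=3, param_dim=2)
s = np.ones(4)
a, params = agent.act(s, epsilon=0.0)
```

Splitting the policy this way keeps the discrete choice and the continuous parameterization in separate, simpler learners, which is the core idea the abstract attributes to the architecture.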


Files in this item


This item appears in the following Collection(s)
