I am using MATLAB R2020a, where rlRepresentation is "not recommended." As a result, I need to replace it with one of the critic or actor representations listed in the compatibility guide (https://www.mathworks.com/help/reinforcement-learning/ref/rlrepresentation.html#mw_a6277225-fecf-4d97-9549-1fc4799bf5b6). I tried replacing rlRepresentation with rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, and rlStochasticActorRepresentation (though I left rlRepresentationOptions as is where it came up). All of them produced errors; rlValueRepresentation and rlStochasticActorRepresentation produced the fewest (and the same) errors:
Error using rlStochasticActorRepresentation (line 93)
Too many input arguments.
Error in createDDPGNetworks (line 51)
critic = rlStochasticActorRepresentation (criticNetwork,criticOptions, ...
Since the critic and the actor both produce the same error, I suspect it has something to do with rlRepresentationOptions, since (as far as I understand) it is what supplies properties to the actors and critics.
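For reference, here is how I currently understand the new R2020a constructors should be called for a DDPG agent, based on the documentation. The variable names (criticNetwork, actorNetwork, obsInfo, actInfo) and the layer names ('state', 'action') are placeholders from my own script, so I may well be getting the argument order wrong:

```matlab
% Placeholder sketch -- criticNetwork, actorNetwork, obsInfo, actInfo,
% and the layer names 'state'/'action' come from my own script.

% DDPG critic: a Q-value representation over observations AND actions.
% The observation/action info objects go before the name-value pairs,
% and the options object comes last.
criticOptions = rlRepresentationOptions('LearnRate',1e-3);
critic = rlQValueRepresentation(criticNetwork, obsInfo, actInfo, ...
    'Observation', {'state'}, 'Action', {'action'}, criticOptions);

% DDPG actor: a deterministic actor representation, same argument pattern.
actorOptions = rlRepresentationOptions('LearnRate',1e-4);
actor = rlDeterministicActorRepresentation(actorNetwork, obsInfo, actInfo, ...
    'Observation', {'state'}, 'Action', {'action'}, actorOptions);
```

If that pattern is right, then my "Too many input arguments" error may simply mean I passed the options object in the position where the constructor expects the observation info, rather than anything wrong with rlRepresentationOptions itself.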
Any help is appreciated.