I have an enabled subsystem that should run until the "subsystem time" t_subsystem reaches a specific value. I keep time inside the subsystem by integrating the constant 1 and writing the result to an output port of the subsystem.
This virtual subsystem time is then used to decide whether the subsystem should run (t_target > t_subsystem) or "pause" (otherwise):
If I omit this "float comparison correction" constant, the subsystem does not stop exactly at the target time but runs until, for example, t_subsystem = 1.075 when t_target = 1.0 (with a global step size of 0.1).
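To illustrate the floating-point side of this (outside Simulink), here is a plain-Python sketch of why repeatedly adding a step of 0.1 does not land exactly on 1.0 in binary floating point, and how a small tolerance (the "float comparison correction") changes where a strict comparison stops. The tolerance value 1e-9 is only an illustrative choice:

```python
step = 0.1   # fixed step size, same as the global step size above
t = 0.0
while t < 1.0:            # strict comparison, no tolerance
    t += step             # accumulate the step, as the integrator does
print(t)                  # overshoots: one extra step past 1.0

eps = 1e-9                # illustrative tolerance, much smaller than the step
t = 0.0
while t < 1.0 - eps:      # comparison with "float comparison correction"
    t += step
print(abs(t - 1.0) < eps) # True: stops at (numerically) the target time
```

After ten additions, t is 0.9999999999999999 rather than 1.0, so the strict comparison stays true for one more step; the tolerance absorbs that rounding error.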
1) My solution seems to work, but it of course creates an algebraic loop and the corresponding warnings. Will it still work if the subsystem becomes far more complicated (more inputs and outputs, additional integrators [without algebraic loops, though])? Or is there a better way to solve the problem at hand [keeping a virtual time and running the subsystem only when necessary]? In the actual application, t_target would of course not be a constant but would increase irregularly.
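For reference, the gating logic I mean can be sketched in plain Python (this is only the logic, not Simulink semantics; the step size dt, the tolerance tol, and the function name are illustrative, and the enable decision is computed from the previous step's clock value, which is the role a Unit Delay would play in breaking the algebraic loop):

```python
dt = 0.1      # fixed solver step size (assumption)
tol = 1e-9    # "float comparison correction" (assumption)

def run_gated_clock(t_target, n_steps=50):
    """Advance a virtual clock only while a (delayed) enable signal is on."""
    t_sub = 0.0          # subsystem's virtual time (integrator state)
    enable_prev = True   # enable decision carried over from the previous step
    for _ in range(n_steps):
        if enable_prev:
            t_sub += dt  # integrate the constant 1 while enabled
        enable_prev = t_sub < t_target - tol  # enable for the next step
    return t_sub

print(run_gated_clock(1.0))  # stops at (numerically) t_target
```

With the tolerance in place, the clock stops once it is within rounding error of t_target and stays frozen afterwards.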
2) Is this "float comparison correction" a good idea, and is it really necessary because of floating-point errors?
[Attachment] Warnings about algebraic loop
[Attachment] If I add a Unit Delay block and adjust t_target to 0.9 (1.0 minus the step size), I get either t_subsystem = 0.975 (with the float comparison correction) or 1.075 (without it).
[Attachment] If I keep time on the top level instead of inside the subsystem, there is no loop, but then there is a difference between the "actual subsystem time" and the "time calculated on the top level":