iteration/resource limits on CONOPT
Could we lift these limits on CONOPT?
I'm mainly interested in `reslim` for Nash, because the REMIND-EU calibration has turned out to need more time to find a feasible solution (at least three and a half hours). I don't think limiting `reslim` is useful (anymore).
- Most runs solve faster, so this limit is rarely reached (and thus rarely matters).
- If CONOPT does not find a solution, I would like it to keep trying. It will likely stop at some point anyway, because the gradient gets too flat or it hits an `NA`. But if it doesn't, letting it keep going may yield a feasible (unconverged) .gdx from which to start a new run. (This has worked for me often for initial calibrations of new parametrisations.)
- When CONOPT stops because of a time-out, it is unlikely to find a feasible solution within two hours in the next `solve` iteration or even Nash iteration; it will just waste eleven iterations trying again from the beginning, without informing the user, who could otherwise intervene.
- Runs are limited by Slurm in any case, so dropping this limit does not risk wasting more cluster resources.
The non-Nash `reslim` (eleven days) isn't really different from the default (317 years) – most runs will be cancelled well before that, and if not (Negishi?), users are likely to check in on them.
The same argument applies to `iterlim`: 1e6 is an arbitrary number that is not qualitatively different from the default (2e9).
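For reference, a minimal sketch of what dropping the caps amounts to in GAMS terms, assuming the limits are set via plain `option` statements (REMIND may set them through its own switches instead):

```gams
* Restore the GAMS defaults instead of the custom caps:
* reslim default is 1e10 seconds (~317 years),
* iterlim default is 2147483647 (~2e9 iterations).
option reslim = 1e10;
option iterlim = 2147483647;
```

Equivalently, simply not setting these options at all leaves the defaults in place, which is the point of the proposal.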