NetOpt2020 2020 01 22 Mario Exercises
where $U_{i,t-1}$ is the set of neighbors that are active in time step $t-1$. Therefore, a node's state
may switch from active back to inactive if the hurdle increases or the set of active neighbors
shrinks in a later time step.
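As a reading aid, one way to write such a non-progressive update rule is sketched below; the edge weights $w_{ji}$, the (possibly time-dependent) hurdle $\theta_{i,t}$ and the neighborhood $N(i)$ are assumed notation and may be named differently on the slides.

% Sketch of a non-progressive LTM update (assumed notation):
% node i is active in step t iff the weight of its neighbors active in step t-1
% reaches the hurdle; activity is re-evaluated in every step.
\[
  x_{i,t} \;=\;
  \begin{cases}
    1 & \text{if } \sum_{j \in U_{i,t-1}} w_{ji} \,\ge\, \theta_{i,t},\\
    0 & \text{otherwise,}
  \end{cases}
  \qquad
  U_{i,t-1} \;=\; \{\, j \in N(i) : x_{j,t-1} = 1 \,\}.
\]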
We now state the influence maximization problem (slide 14) for such a non-progressive LTM.
The original objective of maximizing the number of active nodes no longer makes sense in the
non-progressive setting, because nodes can also deactivate. Instead, we maximize the sum of active
time steps over all nodes within the time horizon $T$. Formulate this problem as an MILP with
time-indexed variables (slide 43).
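One possible time-indexed sketch of such an MILP is given below; it assumes binary variables $x_{i,t}$ (node $i$ is active in step $t$), seed variables $y_i$, a seed budget $k$, and seeds that are kept active throughout the horizon. All of this is assumed notation for illustration of the time-indexed structure and need not coincide with the formulation intended on slide 43.

% Sketch of a time-indexed MILP (assumed notation, not the official model answer):
\begin{align*}
  \max\;\; & \sum_{i \in V} \sum_{t=1}^{T} x_{i,t} \\
  \text{s.t.}\;\;
  & \theta_{i,t}\, x_{i,t} \;\le\; \sum_{j \in N(i)} w_{ji}\, x_{j,t-1} \;+\; \theta_{i,t}\, y_i
    && i \in V,\; t = 1,\dots,T, \\
  & x_{i,0} \;=\; y_i && i \in V, \\
  & \sum_{i \in V} y_i \;\le\; k, \\
  & x_{i,t} \in \{0,1\},\;\; y_i \in \{0,1\} && i \in V,\; t = 0,\dots,T.
\end{align*}

In this sketch only the "can be active" direction of the threshold rule is enforced: a non-seed node may be active in step $t$ only if its active neighbors from step $t-1$ reach the hurdle, while seeds are always allowed to be active. If the weights are nonnegative, keeping a node active never hurts the objective, so a maximizing solution should reproduce the deterministic dynamics; whether this argument applies exactly depends on the model assumptions from the slides.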