Abstract
This paper proposes an interactive multiobjective evolutionary algorithm (MOEA) that attempts to learn a value function capturing the user's true preferences. At regular intervals, the user is asked to rank a single pair of solutions. This information is used to update the algorithm's internal value function model, and the model is used in subsequent generations to rank solutions that are incomparable according to dominance. This speeds up evolution toward the region of the Pareto front that is most desirable to the user. We take the most general additive value function as the preference model, and we empirically compare different ways to identify the value function most representative of the given preference information, different types of user preferences, and different ways to use the learned value function within the MOEA. Results on a number of different scenarios suggest that the proposed algorithm works well over a range of benchmark problems and types of user preferences.
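The core learning step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): fit a general additive value function, with piecewise-linear marginal value functions, to pairwise comparisons via linear programming, then use it to order dominance-incomparable solutions. The number of breakpoints, the normalisation, the assumption that objectives are maximised and scaled to [0, 1], and all names below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog


def interp_coeffs(x, breakpoints):
    """Coefficients of the piecewise-linear marginal value at x, expressed
    over the unknown values at the breakpoints (linear in those unknowns)."""
    c = np.zeros(len(breakpoints))
    k = np.searchsorted(breakpoints, x, side="right") - 1
    k = min(max(k, 0), len(breakpoints) - 2)
    w = (x - breakpoints[k]) / (breakpoints[k + 1] - breakpoints[k])
    c[k], c[k + 1] = 1.0 - w, w
    return c


def fit_additive_value_function(pairs, n_obj, n_bp=5):
    """Fit marginal values v[i, k] at equally spaced breakpoints so that for
    every (a, b) in `pairs` (a preferred to b), U(a) >= U(b) + eps, and eps is
    maximised. Objectives are assumed maximised and normalised to [0, 1]."""
    bp = np.linspace(0.0, 1.0, n_bp)
    n_var = n_obj * n_bp + 1                       # breakpoint values + eps
    idx = lambda i, k: i * n_bp + k

    def value_row(x):                              # row of U(x) over the unknowns
        row = np.zeros(n_var)
        for i in range(n_obj):
            row[i * n_bp:(i + 1) * n_bp] = interp_coeffs(x[i], bp)
        return row

    A_ub, b_ub = [], []
    for i in range(n_obj):                         # monotonicity: v[i,k] <= v[i,k+1]
        for k in range(n_bp - 1):
            r = np.zeros(n_var)
            r[idx(i, k)], r[idx(i, k + 1)] = 1.0, -1.0
            A_ub.append(r); b_ub.append(0.0)
    for a, b in pairs:                             # preference: U(b) - U(a) + eps <= 0
        r = value_row(b) - value_row(a)
        r[-1] = 1.0
        A_ub.append(r); b_ub.append(0.0)

    A_eq, b_eq = [], []
    for i in range(n_obj):                         # each marginal is 0 at the worst level
        r = np.zeros(n_var); r[idx(i, 0)] = 1.0
        A_eq.append(r); b_eq.append(0.0)
    r = np.zeros(n_var)                            # marginals sum to 1 at the best levels
    for i in range(n_obj):
        r[idx(i, n_bp - 1)] = 1.0
    A_eq.append(r); b_eq.append(1.0)

    c = np.zeros(n_var); c[-1] = -1.0              # maximise eps
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * (n_var - 1) + [(None, None)])
    v = res.x[:-1].reshape(n_obj, n_bp)
    return lambda x: sum(float(interp_coeffs(x[i], bp) @ v[i]) for i in range(n_obj))


# Toy usage: the user states that sol_a is preferred to sol_b; the learned
# value function then also orders a third, dominance-incomparable solution.
sol_a, sol_b, sol_c = np.array([0.9, 0.4]), np.array([0.5, 0.5]), np.array([0.7, 0.6])
U = fit_additive_value_function([(sol_a, sol_b)], n_obj=2)
print(sorted([("a", U(sol_a)), ("b", U(sol_b)), ("c", U(sol_c))], key=lambda t: -t[1]))
```

In the interactive MOEA setting described above, such a fitted value function would be re-estimated after each new pairwise comparison and used only to break ties among solutions that dominance alone cannot order.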
Original language | English |
---|---|
Pages (from-to) | 88-102 |
Number of pages | 15 |
Journal | IEEE Transactions on Evolutionary Computation |
Volume | 19 |
Issue number | 1 |
Early online date | 30 Jan 2014 |
DOIs | |
Publication status | Published - 1 Feb 2015 |
Keywords
- optimization
- additives
- educational institutions
- linear programming
- electronic mail
- computational modeling
- business