In the following we discuss several examples of move-based (as opposed to constructive) search methods. These methods were originally developed for unconstrained problems, but they work for certain classes of constrained problems as well.
From a technical point of view, the main difference between tree search and move-based search is that tree search is monotonic: constraints get tightened when going down the tree, and this is undone in reverse order when backing up the tree to a parent node. This fits well with the idea of constraint propagation. The main characteristic of a move-based search, on the other hand, is that a move produces a small change, but it is not clear what effect this change will have on the constraints: they may become more or less satisfied. We therefore need implementations of the constraints that monitor changes rather than propagate instantiations. This functionality is provided by the ECLiPSe repair library, which is used in the following examples. The repair library is described in more detail in the ECLiPSe Library Manual. The ECLiPSe code for all the examples in this section is available in the file knapsack_ls.ecl in the doc/examples directory of your ECLiPSe installation.
We will demonstrate the local search methods using the well-known knapsack problem. The problem is the following: given a container of a given capacity and a set of items with given weights and profit values, find a subset of items to pack into the container such that their total weight does not exceed the container's capacity and the sum of their profits is maximal.
The model for this problem involves N boolean variables, a single inequality constraint to ensure the capacity restriction, and an equality to define the objective function.
The tree search program for this problem looks as follows:
:- lib(fd).

knapsack(N, Profits, Weights, Capacity, Profit) :-
    length(Vars, N),                    % N boolean variables
    Vars :: 0..1,
    Capacity #>= Weights*Vars,          % the single constraint
    Profit #= Profits*Vars,             % the objective
    min_max(labeling(Vars), -Profit).   % branch-and-bound search

At the end of the problem modelling code, a standard branch-and-bound tree search (min_max) is invoked in the last line. Its parameters are the search goal labeling(Vars) and the cost expression to be minimised: since min_max minimises, we pass the negated profit -Profit, which makes the search maximise the profit.
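For illustration, a toplevel query might look as follows (the item data is invented for this example):

?- knapsack(4, [15,12,9,5], [5,4,3,2], 9, Profit).

min_max prints every improving solution as it is found and eventually instantiates the variables to an optimal packing; with this data the optimum takes the first two items, for a profit of 27.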
For the local search methods we modify the model as follows:

:- lib(fd).
:- lib(repair).

knapsack(N, Profits, Weights, Capacity, Opt) :-
    length(Vars, N),
    Vars :: 0..1,
    Capacity #>= Weights*Vars  r_conflict cap,
    Profit tent_is Profits*Vars,
    local_search(<extra parameters>, Vars, Profit, Opt).

We are now using three features from the repair library:

- Tentative values: each variable can be given a tentative value, which represents its value in the current, not necessarily consistent, assignment. Tentative values are set with tent_set and read with tent_get.
- The capacity constraint is annotated with r_conflict cap. This makes it a monitored constraint: whenever it is violated by the current tentative assignment, it appears in the conflict set named cap, which can be inspected with conflict_constraints/2.
- The objective is stated with tent_is instead of #=, so the tentative value of Profit is continuously updated to reflect the profit of the current tentative assignment.
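As a small illustration of how these primitives interact, consider the following sketch (the constraint and the numbers are invented for this example):

?- lib(fd), lib(repair),
   [X,Y] :: 0..1,
   X tent_set 1, Y tent_set 1,            % initial tentative assignment
   10 #>= 8*X + 7*Y  r_conflict cap,      % monitored capacity constraint
   S tent_is X + Y,                       % S tracks the tentative sum
   conflict_constraints(cap, Conflicts),  % violated: one-element list
   X tent_set 0,                          % repair the violation
   conflict_constraints(cap, []),         % conflict set is empty again
   S tent_get V.                          % V is now 1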
In the literature, e.g. in Localizer: A Modeling Language for Local Search, L. Michel and P. Van Hentenryck, Proceedings CP97, LNCS 1330, Springer 1997, local search methods are often characterised by the following nested-loop program schema:
local_search:
    set starting state
    while global_condition
        while local_condition
            select a move
            if acceptable
                do the move
                if new optimum
                    remember it
        endwhile
        set restart state
    endwhile

The programs in the following sections all follow this schema, except that some methods (random walk and tabu search) are even simpler and use only a single loop with a single termination condition.
As a simple example of local search, let us look at a random walk strategy. The idea is to start from a random tentative assignment of variables to 0 (item not in knapsack) or 1 (item in knapsack), then to remove random items (changing 1 to 0) if the knapsack's capacity is exceeded and to add random items (changing 0 to 1) if there is capacity left. We do a fixed number (MaxIter) of such steps and keep track of the best solution encountered.
Each step consists of checking whether the capacity constraint is tentatively satisfied: if so, the current assignment is a solution, whose profit we compare against the best one found so far, and we then add a random item; otherwise we remove a random item.
Here is the ECLiPSe program. We assume that the problem has been set up as explained above. The violation of the capacity constraint is checked by looking at the conflict constraints. If there are no conflict constraints, the constraints are all tentatively satisfied and the current tentative values form a solution to the problem. The associated profit is obtained by looking at the tentative value of the Profit variable (which is being constantly updated by tent_is).
random_walk(MaxIter, VarArr, Profit, Opt) :-
    init_tent_values(VarArr, random),         % starting point
    ( for(_,1,MaxIter),                       % do MaxIter steps
      fromto(0, Best, NewBest, Opt),          % track the optimum
      param(Profit,VarArr)
    do
        ( conflict_constraints(cap,[]) ->     % it's a solution!
            Profit tent_get CurrentProfit,    % what is its profit?
            ( CurrentProfit > Best ->         % new optimum?
                printf("Found solution with profit %w%n", [CurrentProfit]),
                NewBest=CurrentProfit         % yes, remember it
            ;
                NewBest=Best                  % no, ignore
            ),
            change_random(VarArr, 0, 1)       % add another item
        ;
            NewBest=Best,
            change_random(VarArr, 1, 0)       % remove an item
        )
    ).

The auxiliary predicate init_tent_values sets the tentative values of all variables in the array randomly to 0 or 1, and change_random changes a randomly selected variable with a tentative value of 0 to 1, or vice versa. Note that we use an array, rather than a list of variables, to provide more convenient random access. The complete code with the auxiliary predicate definitions can be found in the file knapsack_ls.ecl in the doc/examples directory of your ECLiPSe installation.
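For concreteness, the helpers could look roughly as follows (a sketch only, assuming an array of repair variables; not necessarily the definitions used in knapsack_ls.ecl):

% Set the tentative value of every variable randomly to 0 or 1,
% or uniformly to the given integer value
init_tent_values(VarArr, random) :- !,
    ( foreacharg(X, VarArr) do
        V is random mod 2,
        X tent_set V
    ).
init_tent_values(VarArr, Value) :-
    integer(Value),
    ( foreacharg(X, VarArr) do
        X tent_set Value
    ).

% Pick random variables until one with tentative value From is found,
% then change it to To (assumes such a variable exists)
change_random(VarArr, From, To) :-
    functor(VarArr, _, N),
    I is random mod N + 1,
    arg(I, VarArr, X),
    X tent_get V,
    ( V =:= From ->
        X tent_set To
    ;
        change_random(VarArr, From, To)
    ).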
The following hill-climbing implementation is an instance of the nested-loop program schema introduced above. The idea is to start from a configuration which is certainly a solution (the empty knapsack), and to attempt random moves for MaxIter iterations, accepting only those that lead uphill. Then we restart from the empty knapsack and try again, MaxTries times in all:
hill_climb(MaxTries, MaxIter, VarArr, Profit, Opt) :-
init_tent_values(VarArr, 0), % starting solution
(
for(I,1,MaxTries),
fromto(0, Opt1, Opt4, Opt),
param(MaxIter,Profit,VarArr)
do
(
for(J,1,MaxIter),
fromto(Opt1, Opt2, Opt3, Opt4),
param(I,VarArr,Profit)
do
Profit tent_get PrevProfit,
(
flip_random(VarArr), % try a move
Profit tent_get CurrentProfit,
CurrentProfit > PrevProfit, % is it uphill?
conflict_constraints(cap,[]) % is it a solution?
->
( CurrentProfit > Opt2 -> % is it new optimum?
printf("Found solution with profit %w%n",
[CurrentProfit]),
Opt3=CurrentProfit % accept and remember
;
Opt3=Opt2 % accept
)
;
Opt3=Opt2 % reject (move undone)
)
),
init_tent_values(VarArr, 0) % restart
).
The move operator is implemented as follows. It chooses a random variable X from the array of variables and changes its tentative value from 0 to 1 or from 1 to 0, respectively:

flip_random(VarArr) :-
    functor(VarArr, _, N),
    X is VarArr[random mod N + 1],
    X tent_get Old,
    New is 1-Old,
    X tent_set New.

Some further points are worth noticing:

- The move is made (by changing a tentative value) inside the condition of an if-then-else. If one of the subsequent tests fails, i.e. if the move is not accepted, the change of the tentative value is automatically undone by backtracking (hence the comment "move undone" in the rejected branch).
- A move is only accepted if it strictly increases the profit and the result is again a solution, i.e. the conflict set of the capacity constraint is empty.
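To instantiate the local_search placeholder from the model above with this method, one could write, for example (the parameter values are arbitrary):

local_search(VarArr, Profit, Opt) :-
    hill_climb(20, 1000, VarArr, Profit, Opt).   % 20 restarts, 1000 moves each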
Simulated annealing is a slightly more complex variant of local search. It follows the nested-loop schema shown above and uses the same move operator as the hill-climbing example. The differences are in the termination conditions and in the acceptance criterion for a move.
The outer loop simulates the cooling process by reducing the temperature variable T; the inner loop performs random moves until MaxIter steps have been made without improvement of the objective. The acceptance criterion is the classical one for simulated annealing: uphill moves are always accepted, downhill moves with a probability that decreases with the temperature. The search routine must be invoked with appropriate start and end temperatures; they should roughly correspond to the maximum and minimum profit changes that a move can incur.
sim_anneal(Tinit, Tend, MaxIter, VarArr, Profit, Opt) :-
starting_solution(VarArr), % starting solution
( fromto(Tinit, T, Tnext, Tend),
fromto(0, Opt1, Opt4, Opt),
param(MaxIter,Profit,VarArr,Tend)
do
printf("Temperature is %d%n", [T]),
( fromto(MaxIter, J0, J1, 0),
fromto(Opt1, Opt2, Opt3, Opt4),
param(VarArr,Profit,T)
do
Profit tent_get PrevProfit,
( flip_random(VarArr), % try a move
Profit tent_get CurrentProfit,
exp((CurrentProfit-PrevProfit)/T) > frandom, % annealing acceptance test
conflict_constraints(cap,[]) % is it a solution?
->
( CurrentProfit > Opt2 -> % is it new optimum?
printf("Found solution with profit %w%n",
[CurrentProfit]),
Opt3=CurrentProfit, % accept and remember
J1=J0
; CurrentProfit > PrevProfit ->
    Opt3=Opt2, J1=J0 % accept uphill move
;
    Opt3=Opt2, J1 is J0-1 % accept downhill move
)
;
Opt3=Opt2, J1 is J0-1 % reject
)
),
Tnext is max(fix(0.8*T),Tend)
).
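The routine might be invoked as follows, again as a hypothetical instance of the local_search placeholder (assuming individual item profits of up to about 70, so that 70 and 1 roughly bracket the profit change a single move can incur):

local_search(VarArr, Profit, Opt) :-
    sim_anneal(70, 1, 100, VarArr, Profit, Opt).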
In the following simple example, the tabu list has a length determined by the parameter TabuSize. The local moves consist of either adding the item with the best relative profit to the knapsack, or removing the one with the worst relative profit from the knapsack. In both cases, the move gets remembered in the fixed-size tabu list, and the complementary move is forbidden for the next TabuSize moves.
tabu_search(TabuSize, MaxIter, VarArr, Profit, Opt) :-
starting_solution(VarArr), % starting solution
tabu_init(TabuSize, none, Tabu0),
( fromto(MaxIter, I0, I1, 0),
fromto(Tabu0, Tabu1, Tabu2, _),
fromto(0, Opt1, Opt2, Opt),
param(VarArr,Profit)
do
( try_set_best(VarArr, MoveId), % try uphill move
conflict_constraints(cap,[]), % is it a solution?
tabu_add(MoveId, Tabu1, Tabu2) % is it allowed?
->
Profit tent_get CurrentProfit,
( CurrentProfit > Opt1 -> % is it new optimum?
printf("Found solution with profit %w%n", [CurrentProfit]),
Opt2=CurrentProfit % accept and remember
;
Opt2=Opt1 % accept
),
I1 is I0-1
;
( try_clear_worst(VarArr, MoveId), % try downhill move
tabu_add(MoveId, Tabu1, Tabu2) % is it allowed?
->
I1 is I0-1,
Opt2=Opt1 % accepted, but cannot improve optimum
;
I1=0, % no moves possible, stop
Opt2=Opt1 % reject
)
)
).
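The tabu-list bookkeeping could be realised, for example, as follows (a hypothetical sketch: the move identifiers set(I)/clear(I) and the helper complementary/2 are invented for illustration; the actual definitions are in knapsack_ls.ecl):

% A tabu list of fixed length TabuSize, initially filled with a dummy entry
tabu_init(TabuSize, Dummy, Tabu) :-
    length(Tabu, TabuSize),
    ( foreach(E, Tabu), param(Dummy) do E = Dummy ).

% Fail if the complementary move is still in the tabu list, otherwise
% record the new move and drop the oldest entry
tabu_add(MoveId, Tabu0, [MoveId|Tabu1]) :-
    complementary(MoveId, Compl),
    \+ member(Compl, Tabu0),
    append(Tabu1, [_Oldest], Tabu0).

% A move either sets or clears the item at index I
complementary(set(I), clear(I)).
complementary(clear(I), set(I)).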
In practice, tabu search often forms only a skeleton around which a more complex search algorithm is built. An example is the application of tabu search to the job-shop problem described by Nowicki and Smutnicki (A Fast Taboo Search Algorithm for the Job Shop Problem, Management Science, Vol. 42, No. 6, June 1996).