The ICM in the DLM algorithm

  • Authors:
  • Yousef Kilani; Abdullah Mohd Zin

  • Affiliations:
  • Prince Hussein bin Abdullah Information Technology College, Al al-Bayt University, Jordan; Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Malaysia

  • Venue:
  • AIKED'05 Proceedings of the 4th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases
  • Year:
  • 2005


Abstract

Local search methods for solving constraint satisfaction problems, such as GSAT, WalkSAT and DLM, start the search for a solution from a random assignment. Local search then examines the neighbours of this assignment, using a penalty function to determine a better neighbouring valuation to move to. It repeats this process until it finds a solution that satisfies all constraints. ICM [8] treats some of the constraints as hard constraints that must always be satisfied. In this way, the hard constraints reduce the possible neighbours in each move and hence the overall search space. We choose the hard constraints in such a way that the space of valuations satisfying them is connected, in order to guarantee that a local search can reach a solution from any valuation in this space. We show in this paper how incorporating learning in the island traps and restart improves the DLMI algorithm [8].
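To illustrate the penalty-driven local search loop the abstract describes (random start, examine neighbours, move to the one with the lowest penalty, repeat), the following is a minimal sketch in the spirit of GSAT-style search over SAT clauses. The clause encoding, tie-breaking, and flip budget are illustrative assumptions and do not reproduce the authors' DLM or ICM algorithms.

```python
# Minimal sketch of penalty-based local search for SAT (GSAT-style),
# illustrating the generic loop described in the abstract.
# This is NOT the authors' DLM/ICM method; names and details are assumptions.
import random

def unsatisfied(clauses, assignment):
    """Return the clauses not satisfied by the current assignment.
    A clause is a list of ints: +v means variable v, -v means its negation."""
    return [c for c in clauses
            if not any((lit > 0) == assignment[abs(lit)] for lit in c)]

def local_search(clauses, num_vars, max_flips=10000, seed=0):
    rng = random.Random(seed)
    # Start from a random assignment, as in GSAT/WalkSAT/DLM.
    assignment = {v: rng.choice([True, False]) for v in range(1, num_vars + 1)}
    for _ in range(max_flips):
        if not unsatisfied(clauses, assignment):
            return assignment              # all constraints satisfied
        # Penalty function: number of unsatisfied clauses.
        # Examine neighbours (single-variable flips); move to the best one.
        best_var, best_penalty = None, None
        for v in range(1, num_vars + 1):
            assignment[v] = not assignment[v]
            penalty = len(unsatisfied(clauses, assignment))
            assignment[v] = not assignment[v]
            if best_penalty is None or penalty < best_penalty:
                best_var, best_penalty = v, penalty
        assignment[best_var] = not assignment[best_var]
    return None                            # flip budget exhausted, no solution found

if __name__ == "__main__":
    # Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    clauses = [[1, 2], [-1, 3], [-2, -3]]
    print(local_search(clauses, num_vars=3))
```

In ICM terms, one would additionally designate some clauses as hard constraints and restrict the candidate flips to those that keep every hard constraint satisfied, shrinking the neighbourhood at each move; the sketch above omits that restriction.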