Case-incorporated Heuristically-guided Genetic Algorithm for Dynamic JSSP


  • Abstract.  This paper describes the combination and implementation of two recent genetic-algorithm methods, the Case-Injected Genetic Algorithm and the Heuristically-guided Genetic Algorithm, for dynamic job-shop scheduling problems: the case-incorporated, heuristically-guided genetic algorithm. The implementation follows the model of the former method, the genetic algorithm parameters are drawn from a combination of both, and the outputs (objective functions and results) follow the latter method. Promising gains are obtained from this combination.

1. Introduction


The Job Shop Scheduling Problem (JSSP) can be described as a set of different machines that perform operations on jobs: each job has a specified processing order through the machines; in other words, a job is composed of an ordered list of operations on machines, each with a given processing time. Dorndorf and Pesch gave two constraints on such problems: 1) there are no precedence constraints among operations of different jobs, and 2) operations cannot be interrupted and each machine can handle only one job at a time [9]. We define JSSP more strictly by borrowing Giffler and Thompson's classic definition of the production-scheduling problem:
  1. There are n jobs, each of which must be processed by m machines, or more accurately, by one or more of the m machines. For convenience, we assume m machines; in case of fewer machines, we simply consider some of the operation times (see definition below) to be zero.
  2. The processing of a job on a machine is called an operation. Operations are the elements to be scheduled.
  3. An operation, once started, may not be interrupted.
  4. Some pairs of operations may have to be performed in succession.
  5. Each operation takes a given finite interval of time, i.e., the time required to perform the operation.
  6. Operations for each job must be processed in a linear sequence and no job is processed twice on any machine.
  7. Each operation must be performed to completion before any operation that it precedes may be started. The time to perform operations is independent of the order in which they are performed.
The objective in solving a JSSP is to find a schedule with minimal makespan (the total time required to complete all jobs). JSSP is NP-hard, so it cannot (as far as is known) be solved in polynomial time. A huge body of literature attempts to find near-optimal solutions to such hard problems on realistic time scales, and among these approaches the genetic algorithm (GA) has proved one of the most promising [2, 3, 7, 8, 9]. The dynamic JSSP differs from the (static) JSSP in that each job carries a different weight according to its nature and has its own arrival and due times. In most cases a method for solving the static JSSP can be applied to the dynamic JSSP with some modification.

Genetic algorithms, in contrast to local search algorithms, are regarded as general (global) search strategies and optimization methods [10]. GA grew out of early research on the theory of evolution by Holland, Rechenberg, and Schwefel [9, 10]. GA applications to JSSP have produced a large number of different but closely related methods in the literature, and the most efficient one for dealing with the dynamic JSSP appears to be Emma Hart's Heuristically-guided Genetic Algorithm (HGA) [3].

On the other hand, the concept of combining genetic algorithms with case-based reasoning, proposed by Louis, McGraw, and Wyckoff in their work on analyzing and explaining solutions found by genetic algorithm search [11], has produced promising results on problems whose solutions are uncertain (unknown) [1, 2, 11]. The Case Injected Genetic AlgoRithm (CIGAR) system has been successfully applied to several problems, including a robotics control problem and the JSSP, with clear improvements in the results [1]. If CIGAR is incorporated into HGA, we can therefore expect gains in the results for dynamic JSSPs.

In this paper, we incorporate CIGAR into HGA and analyze the results. In the following sections we first introduce the GA method, then describe the concepts of HGA and CIGAR as well as the methodology of the CIGAR-incorporated HGA. Finally, we analyze the results and give our conclusions and discussion.


2. Methodology

2.1 Genetic Algorithm Principles

GA has its origin in the theory of evolution by natural selection. It aims to produce near-optimal solutions by letting a set of feasible solutions go through a sequence of transformations, administered by a selection scheme, toward high-quality solutions. The feasible solutions (chromosomes) are grouped into a population, which is updated by the transformations (the three operators reproduction, crossover, and mutation, described below): new chromosomes (children) are produced from old chromosomes (parents) and become members of the new population. The principle of survival of the fittest drives reproduction: parents that adapt better are more likely to be selected to produce children and thus pass "better genetic traits" on to them. Once the new population is in place, a new round of reproduction begins, and this evolutionary cycle continues from generation to generation until some stopping criteria are met.

Goldberg gave four differences that separate genetic algorithms from conventional optimization techniques [10]:
  1. Direct manipulation of a coding of the parameters (variable representations at string level), not the parameters themselves, to exploit similarities among high-performance strings.
  2. Search from a population, not a single point.
  3. Search via sampling: GAs use payoff (objective function) information rather than derivatives or other auxiliary knowledge.
  4. Search using stochastic operators, not deterministic rules.
Specifically, a GA can be described as follows [5]:
  1. Representation. A feasible solution (candidate) to a problem is typically represented as an ordered, fixed-length array of values ("alleles") for its attributes ("genes"). The set of alleles for a gene is the set of values that the gene can possibly take. The first step in a GA is to decide how to represent candidate solutions.
  2. Initialization. A set (population) of candidate solutions is generated, usually at random.
  3. Loop through the following steps until a satisfactory solution is found or some other termination criterion is met (such as no improvement in the solution, allotted run time exhausted, etc.): evaluate the fitness of all members of the population, select parents with a bias toward fitter members, create children by crossover and mutation, and replace the old population with the new one. A minimal sketch of this loop appears below.
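As a concrete illustration of steps 1-3, here is a minimal, self-contained Python sketch on a toy bit-string problem (our own example, not the JSSP encoding discussed later; parameter names and default values are illustrative only):

    import random

    def run_ga(fitness, genome_len, pop_size=200, generations=100,
               crossover_rate=0.9, mutation_rate=0.01):
        # Initialization: a random population of fixed-length strings.
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # Evaluation: rank every candidate by the objective function.
            ranked = sorted(pop, key=fitness, reverse=True)

            # Selection: taking the better of two random ranks biases the
            # draw toward fitter parents (a simple rank-based scheme).
            def pick():
                return ranked[min(random.randrange(pop_size),
                                  random.randrange(pop_size))]

            new_pop = []
            while len(new_pop) < pop_size:
                p1, p2 = pick(), pick()
                # Crossover: the child inherits genes from both parents.
                if random.random() < crossover_rate:
                    cut = random.randrange(1, genome_len)
                    child = p1[:cut] + p2[cut:]
                else:
                    child = p1[:]
                # Mutation: occasionally flip a gene.
                child = [1 - g if random.random() < mutation_rate else g
                         for g in child]
                new_pop.append(child)
            pop = new_pop            # Replacement: the children take over.
        return max(pop, key=fitness)

    # Usage: maximize the number of ones in a 30-bit string.
    best = run_ga(fitness=sum, genome_len=30)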

2.2 GA Applications in JSSP


In the Introduction we described the JSSP. Here we introduce GA applications to JSSP, with emphasis on the Heuristically-guided GA (HGA). Most of the following is based on [2, 3, 5]; these works come from the same research group and build on one another.

First let us begin with an example. The following is a standard 6x6 JSSP benchmark problem.

[Figure: a 6x6 benchmark problem]

Generally, in a JSSP there are j jobs and m machines, and each job has a set of operations that must be done on different machines, for different processing times, in a given order. In the above example, job 2 must first go to machine 2 for 8 units of time, then to machine 3 for 5 units of time, and the last operation for job 2 is on machine 4 (4 units of time). The optimal schedule for this 6x6 JSSP is:

[Figure: Gantt chart of the optimal schedule for the 6x6 problem]
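In code, such an instance is commonly represented as one ordered operation list per job. The sketch below (our own notation) fills in only the operations of job 2 quoted above:

    # Each job is an ordered list of (machine, processing_time) operations.
    # Only the operations of job 2 quoted in the text are filled in.
    jobs = {
        2: [(2, 8),   # first:  machine 2 for 8 units of time
            (3, 5),   # second: machine 3 for 5 units of time
            # ... intermediate operations of job 2 ...
            (4, 4)],  # last:   machine 4 for 4 units of time
        # jobs 1 and 3-6 of the benchmark omitted here
    }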


There are four kinds of feasible schedules for a JSSP ([8] gives very strict definitions of feasible and optimal schedules):
  1. inadmissible schedules: schedules with excessive idle time.
  2. semiactive schedules: schedules with no excess idle time.
  3. active schedules: schedules in which no operation can be shifted to an earlier slot without causing some other operation to start later than in the original schedule.
  4. nondelay schedules: schedules in which a machine is never kept idle if some operation can be processed.
The optimal schedule is not necessarily a nondelay schedule, but it is certainly an active schedule [5]. The relationship among the four classes of schedules is as follows [3, 5]:

[Figure: relationship among the four classes of schedules]



As an example, suppose job 1 must go to mc2 (machine 2) first, then mc1 and mc3, while job 2 goes to mc3, then mc1 and mc2. We then have:


[Figure: the four kinds of schedules for the two-job example]

[8] gives 7 steps for calculating all active schedules using the Giffler and Thompson algorithm, which can be interpreted as [3, 5]:
  1. Calculate the set C of all operations that can be scheduled next;
  2. Calculate the completion time of all operations in C, and let mm equal the machine on which the minimum completion time t is achieved;
  3. Let G denote the conflict set of operations on machine mm: the set of operations in C that take place on mm and whose start time is less than t;
  4. Select an operation from G to schedule;
  5. Delete the chosen operation from C and return to step 1.
For calculating non-delay schedules, we have [3]:
  1. Calculate the set C of all operations that can be scheduled next;
  2. Calculate the starting time of each operation in C and let G equal the subset of operations that can start earliest;
  3. Select an operation from G to schedule;
  4. Delete the chosen operation from C and return to step 1.
Both procedures are sketched in code below.
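The two procedures share the same skeleton and differ only in how the set G is formed. The following Python sketch (our own rendering of the steps above, not code from [3] or [8]) makes this concrete; the choose argument stands in for the selection rule of step 4, which is where the heuristics of the next section plug in:

    import random

    def build_schedule(jobs, mode="active", choose=random.choice):
        """jobs: one ordered list of (machine, duration) operations per job.
        Returns a list of (job, op_index, machine, start, end) tuples."""
        next_op = [0] * len(jobs)       # index of each job's next operation
        job_ready = [0] * len(jobs)     # earliest start of each job's next op
        mach_ready = {}                 # time at which each machine is free
        schedule = []
        while any(n < len(j) for n, j in zip(next_op, jobs)):
            # Step 1: the set C of all operations schedulable next.
            C = []
            for j, job in enumerate(jobs):
                if next_op[j] < len(job):
                    m, d = job[next_op[j]]
                    start = max(job_ready[j], mach_ready.get(m, 0))
                    C.append((j, m, start, start + d))
            if mode == "active":
                # Step 2: minimum completion time t, achieved on machine mm.
                _, mm, _, t = min(C, key=lambda op: op[3])
                # Step 3: conflict set G = ops on mm starting before t.
                G = [op for op in C if op[1] == mm and op[2] < t]
            else:
                # Non-delay: G = the operations that can start earliest.
                earliest = min(op[2] for op in C)
                G = [op for op in C if op[2] == earliest]
            # Step 4: select one operation from G (the heuristic goes here).
            j, m, start, end = choose(G)
            # Step 5: schedule it, remove it from consideration, and repeat.
            schedule.append((j, next_op[j], m, start, end))
            next_op[j] += 1
            job_ready[j] = end
            mach_ready[m] = end
        return schedule

    # Usage on the two-job example above (operation times are made up,
    # since the text gives only the machine orders):
    jobs = [[(2, 2), (1, 3), (3, 2)],   # job 1: mc2, mc1, mc3
            [(3, 3), (1, 2), (2, 4)]]   # job 2: mc3, mc1, mc2
    print(build_schedule(jobs, mode="active"))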
The first and most important problem in applying a GA to JSSP is how to encode the problem on a chromosome. Among the many representations, the most interesting is the "Heuristic Combination Method" (HCM), as in [5, 7, 9]. This method uses an implicit representation of a schedule: each gene in a chromosome is a heuristic for one step of generating the schedule. In the following section we introduce a new HCM proposed by Hart and Ross.

2.3  Heuristically-guided Genetic Algorithm (HGA)


HGA was proposed by Emma Hart and Peter Ross [3]. It encodes each gene of the chromosome as a pair (Method, Heuristic). The method, one of the two procedures above (the G&T active-schedule method or the non-delay method), is used to calculate the conflict set of schedulable operations; the heuristic is used to select an operation from the conflict set. The following heuristics are used for dynamic JSSPs:



[Table: the heuristics used to select an operation from the conflict set]


For example, a chromosome can be represented as a string of pairs (m, h), where m denotes the schedule-generation method and h the heuristic applied at the corresponding step.

This representation guarantees a feasible solution, and recombination operators always produce feasible offspring, as the sketch below illustrates.
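A rough sketch of the representation and its operators follows (our own Python rendering; the heuristic names are placeholders for the table above, and decoding simply replays the Giffler & Thompson loop, letting gene i's method form the conflict set at step i and its heuristic pick from it):

    import random

    METHODS = ["active", "non-delay"]
    HEURISTICS = ["h1", "h2", "h3", "h4"]   # stand-ins for the table above

    def random_chromosome(n_ops):
        # One (method, heuristic) gene per scheduling decision.
        return [(random.choice(METHODS), random.choice(HEURISTICS))
                for _ in range(n_ops)]

    def crossover(a, b):
        # One-point crossover: any cut yields another string of valid
        # (method, heuristic) pairs, so every offspring decodes to a
        # feasible schedule.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(c, p_method=0.5, p_heur=0.01):
        # Each allele of a gene mutates at its own rate (the rates we
        # used appear in Section 3).
        return [(random.choice(METHODS) if random.random() < p_method else m,
                 random.choice(HEURISTICS) if random.random() < p_heur else h)
                for (m, h) in c]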

The objective functions used in HGA are four normalized weighted objective functions. In standard notation, with completion time C_j, arrival time r_j, due date d_j, and weight w_j for job j, their unnormalized forms are:

    F  = sum_j w_j (C_j - r_j)                               (Weighted Flow Time)
    L  = sum_j w_j (C_j - d_j)                               (Weighted Lateness)
    T  = sum_j w_j max(0, C_j - d_j)                         (Weighted Tardiness)
    EW = sum_j w_j [max(0, d_j - C_j) + max(0, C_j - d_j)]   (Weighted Earliness + Tardiness)

See [3] for the exact normalized forms.
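In code, these unnormalized forms reduce to a few weighted sums (a sketch under the standard definitions above; the normalization of [3] is omitted, and the field names are ours):

    def objectives(jobs_info):
        """jobs_info: per-job dicts with completion time C, arrival time r,
        due date d, and weight w."""
        F = sum(j["w"] * (j["C"] - j["r"]) for j in jobs_info)        # flow time
        L = sum(j["w"] * (j["C"] - j["d"]) for j in jobs_info)        # lateness
        T = sum(j["w"] * max(0, j["C"] - j["d"]) for j in jobs_info)  # tardiness
        E = sum(j["w"] * max(0, j["d"] - j["C"]) for j in jobs_info)  # earliness
        return {"F": F, "L": L, "T": T, "EW": E + T}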

2.4 Case-Injected Genetic AlgoRithm (CIGAR)

Case-injected GA uses a method similar to case-based reasoning (CBR), in which cases (old problems and their solutions) help solve a new problem that is similar to the old ones. According to [1], a case in a GA is a member of the population (for example, a schedule in JSSP) together with other information such as its fitness, the generation in which it was created, and its parents. The process of using a case-injected GA on a problem can be described as: 1) define and create similar problems, 2) build up the case-base, and 3) run the case-injected GA search.

Our starting point is that similar problems have similar solutions. In GAs, solutions are encoded as strings, and [1] shows that "purely syntactic similarity measures lead to surprisingly good performance." Hamming distance and the Longest Common Subsequence (LCS) can be used to express this similarity. For two strings of the same length, the Hamming distance is the number of positions at which the symbols disagree: the smaller this number, the more similar the two strings. For LCS, the longer the common subsequence, the more similar the two encoded solutions.
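Both measures are straightforward to compute; a minimal sketch follows (our own code; only the Hamming measure is used in our experiments):

    def hamming_similarity(a, b):
        # Fewer disagreeing positions -> more similar (equal-length strings).
        return -sum(x != y for x, y in zip(a, b))

    def lcs_length(a, b):
        # Longest common subsequence by dynamic programming; a longer LCS
        # means the two encoded solutions are more similar.
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n):
            for j in range(m):
                dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                    else max(dp[i][j + 1], dp[i + 1][j]))
        return dp[n][m]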

In the case of our JSSP, a similar problem can be generated by picking a random set of tasks (40% of all tasks) and changing each picked task's duration by a randomly chosen amount of up to 20% of the maximum task duration. Fifty similar problems are used to build up the case-base.
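One way to realize this generator, using the (machine, duration) job lists from earlier, is sketched below (our own code; we assume the duration shift may go in either direction and clamp durations to stay positive):

    import copy
    import random

    def make_similar_problem(jobs, frac=0.4, max_change=0.2):
        """Perturb a random 40% of all tasks, shifting each picked task's
        duration by up to 20% of the longest task duration."""
        new_jobs = copy.deepcopy(jobs)
        tasks = [(j, k) for j, job in enumerate(jobs) for k in range(len(job))]
        longest = max(d for job in jobs for (_, d) in job)
        for j, k in random.sample(tasks, int(frac * len(tasks))):
            m, d = new_jobs[j][k]
            shift = random.uniform(-max_change, max_change) * longest
            new_jobs[j][k] = (m, max(1, round(d + shift)))
        return new_jobs

    # Building the case-base problems from one source problem:
    # similar = [make_similar_problem(jobs) for _ in range(50)]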

For each of the 50 similar problems we run the HGA. Whenever the fitness of the best individual improves during a generation of the GA search, the new best individual is stored in the case-base.

When tackling the target JSSP, we inject a small number of cases into the initial population and generate the rest of the population at random. From then on, during the GA's evolution, we periodically (every 5 generations in our experiments) inject into the current population some cases that are similar to the current best member of the GA population, using Hamming distance as the similarity metric.
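The injection step itself can be sketched as follows (our own code; the number of injected cases and the replacement of the weakest members are our assumptions, while the 5-generation period matches our experiments):

    def inject_cases(population, case_base, best, n_inject=2):
        """Replace the weakest members of a fitness-sorted population with
        the case-base entries closest (by Hamming distance) to `best`."""
        hamming = lambda c: sum(x != y for x, y in zip(c, best))
        closest = sorted(case_base, key=hamming)[:n_inject]
        # population is assumed sorted from fittest to weakest.
        return population[:len(population) - len(closest)] + closest

    # Inside the GA loop, for example:
    # if gen % 5 == 0:
    #     population = inject_cases(population, case_base, population[0])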

Refer to [1] for a detailed description of the CIGAR system and its applications.

2.5  CIGAR-incorporated HGA

The CIGAR-incorporated HGA can be considered a specific (simple) application of CIGAR. It follows nearly all the steps proposed in [1], except that in our experiments we used only Hamming distance to measure similarity and used 25 similar problems; the HGA replaces the plain GA search process.


3. Experiments and Results

We tested the CIGAR-incorporated HGA on a 20x15 and a 15x3 dynamic JSSP.

The GA parameters are: population size 200, generations 100. Twenty-five similar problems are created to build the case-base. The method allele is mutated with probability 0.5 and the heuristic allele with probability 0.01. A generational reproduction strategy with rank selection is used. The program was run 10 times to obtain the final averaged results. The results are normalized as in HGA [3] for comparison (we first produced makespans for some runs to check the correctness of the results).
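For reference, these settings collected into a single configuration (the key names are ours):

    PARAMS = {
        "population_size":    200,
        "generations":        100,
        "similar_problems":   25,     # case-base construction (Section 2.5)
        "p_mutate_method":    0.5,    # method allele mutation rate
        "p_mutate_heuristic": 0.01,   # heuristic allele mutation rate
        "selection":          "rank", # generational reproduction, rank selection
        "injection_period":   5,      # generations between injections (Sec. 2.4)
        "runs":               10,     # results averaged over 10 runs
    }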

The following tables give sampled results for the first 20 generations; detailed results appear in the graphs thereafter. In the tables and graphs, HGA denotes results from the HGA method and CIGAR results from the CIGAR-incorporated HGA method. The HGA results come from our own implementation, not from [3].


Generation   0         2         4         6         8         10        12        14        16        18
HGA          0.43199   0.39424   0.38855   0.38798   0.38798   0.38798   0.38798   0.38798   0.38798   0.38798
CIGAR        0.38392   0.37735   0.37645   0.37645   0.37645   0.37645   0.37645   0.37645   0.37645   0.37645

Table 1. EW15x3 (Weighted Earliness + Tardiness for the 15x3 D-JSSP)




Generation   0         2         4         6         8         10        12        14        16        18
HGA          1.8271    1.7795    1.7706    1.77      1.7687    1.7687    1.7687    1.7687    1.7687    1.7687
CIGAR        1.7845    1.7677    1.7665    1.7658    1.7652    1.7652    1.7652    1.7652    1.7638    1.7638

Table 2. F15x3 (Weighted Flow Time for the 15x3 D-JSSP)




Generation   0         2         4         6         8         10        12        14        16        18
HGA          0.00079   -0.0391   -0.0547   -0.0576   -0.0576   -0.0576   -0.0576   -0.058    -0.058    -0.058
CIGAR        -0.0574   -0.0648   -0.0669   -0.0669   -0.0669   -0.0669   -0.0669   -0.0669   -0.0672   -0.0672

Table 3. L15x3 (Weighted Lateness for the 15x3 D-JSSP)




Generation   0         2         4         6         8         10        12        14        16        18
HGA          0.243     0.1957    0.1914    0.1914    0.1914    0.1914    0.1914    0.1914    0.1914    0.1914
CIGAR        0.1819    0.1771    0.1771    0.1762    0.1754    0.1754    0.1754    0.1754    0.1754    0.1754

Table 4. T15x3 (Weighted Tardiness for the 15x3 D-JSSP)




Generation   0         2         4         6         8         10        12        14        16        18
HGA          0.1688    0.1501    0.1399    0.1299    0.1272    0.1263    0.1258    0.1248    0.1248    0.1248
CIGAR        0.1542    0.1403    0.1327    0.1253    0.1244    0.1243    0.1241    0.1238    0.1238    0.1238

Table 5. EW20x15 (Weighted Earliness + Tardiness for the 20x15 D-JSSP)




Generation   0         2         4         6         8         10        12        14        16        18
HGA          1.4063    1.3916    1.3837    1.3775    1.3766    1.3741    1.3739    1.3736    1.3734    1.3731
CIGAR        1.3895    1.3732    1.3682    1.3624    1.3603    1.3584    1.3581    1.357     1.357     1.357

Table 6. F20x15 (Weighted Flow Time for the 20x15 D-JSSP)




Generation   0         2         4         6         8         10        12        14        16        18
HGA          0.2573    0.0064    -0.0105   -0.0214   -0.0238   -0.0267   -0.0267   -0.0267   -0.0267   -0.0267
CIGAR        -0.0036   -0.0202   -0.029    -0.0303   -0.0322   -0.0328   -0.0332   -0.0332   -0.0332   -0.0332

Table 7. L20x15 (Weighted Lateness for the 20x15 D-JSSP)




Generation   0         2         4         6         8         10        12        14        16        18
HGA          0.1051    0.0918    0.0873    0.0816    0.0795    0.079     0.079     0.079     0.079     0.079
CIGAR        0.1011    0.0942    0.0855    0.0831    0.0814    0.0782    0.0779    0.0779    0.0779    0.0779

Table 8. T20x15 (Weighted Tardiness for the 20x15 D-JSSP)





The result graphs follow; the legends in the diagrams are self-explanatory. For example, a diagram labeled F15x3 is read as follows: for the 15x3 D-JSSP, it plots the average performance over 100 generations, where the objective function is "F", the normalized minimum flow time in each generation.

3.1   15x3 D-JSSP




Fig. 1.  HGA / CIGAR performance on EW15x3





Fig. 2. HGA / CIGAR performance on  F15x3.

 



Fig. 3. HGA / CIGAR performance on  L15x3.





Fig. 4. HGA / CIGAR performance on T15x3.


 


Fig. 5. CIGAR vs HGA (average) performance on EW15x3 (left) and F15x3 (right).





Fig. 6. CIGAR vs HGA (average) performance on L15x3 (left) and T15x3 (right).


3.2  20x15 D-JSSP



Fig. 7. HGA / CIGAR performance on EW20x15.

 



Fig. 8. HGA vs CIGAR performance on EW20x15 (best/left, worst-average/right).





Fig. 9. HGA / CIGAR performance on F20x15.





Fig. 10. HGA vs CIGAR performance on F20x15 (best/left, worst-average/right).

 



Fig. 11. HGA / CIGAR performance on L20x15.





Fig. 12. HGA vs CIGAR performance on L20x15 (best/left, worst-average/right).





Fig. 13.  HGA / CIGAR performance on T20x15.





Fig. 14.  HGA vs CIGAR performance on T20x15 (best/left, worst-average/right).              




The results demonstrate that the CIGAR-incorporated HGA beats HGA on all the best objectives and in most of the average cases, but not on the worst objectives in the population. This can be explained by the injection process: we always injected one or more cases into the population, even after a number of generations when nearly all members of the population already had relatively good objective values and were converging together. How to inject cases into the population, and how to define similarity between problems, are challenging questions that need further consideration. In any case, our purpose is to find the best schedules, not the worst ones.


Note that the best (minimized) objective does not necessarily correspond to the smallest makespan. This is not unreasonable, because the nature of the dynamic JSSP determines the objective function: for example, a job that must be finished as soon as possible after its arrival (and usually carries a much higher weight) generally yields a somewhat larger makespan than the corresponding static JSSP (with the same job-machine time requirements but without the arrival time, due time, and weight specifics).



4. Conclusion and Future Work

In this paper we implemented the case-incorporated HGA on dynamic JSSPs and obtained promising results. This demonstrates that case-injected genetic algorithms, as a general methodology in evolutionary computation, can produce beneficial results on known hard problems. HGA is a very efficient method for the dynamic JSSP, and the case-incorporated HGA still obtained gains over it. On the other hand, as [1] notes, defining and creating similar problems is a challenging subject. How to build more flexible and efficient similar problems for the dynamic JSSP should be considered in future work: for example, what if the arrival/due times and weights change? Another possible improvement is to choose the objective functions, to some extent, according to the nature of the real dynamic JSSP: for example, an application may not care about saving a small amount of makespan (1% of the makespan, for instance) but may place strong emphasis on completing particular jobs as early as possible after their arrival.





References:

  1. Sushil Louis: Learning to Improve Genetic Algorithm Performance on Job Shop Scheduling.
  2. Hsiao-Lan Fang, Peter Ross, and Dave Corne: A Promising Genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems. Proceedings of the Fifth International Conference on Genetic Algorithms, S. Forrest (ed.), San Mateo: Morgan Kaufmann, 1993, pp. 375-382.
  3. Emma Hart and Peter Ross: A Heuristic Combination Method for Solving Job-Shop Scheduling Problems.
  4. Gilbert Syswerda and Jeff Palmucci: The Application of Genetic Algorithms to Resource Scheduling.
  5. Hsiao-Lan Fang: Genetic Algorithms in Timetabling and Scheduling. Ph.D. dissertation.
  6. Y. Song et al.: A Genetic Algorithm with an Incomplete Representation for the Job-Shop Scheduling Problem.
  7. Igor P. Norenkov and Erik D. Goodman: Solving Scheduling Problems via Evolutionary Methods for Rule Sequence Optimization.
  8. B. Giffler and G. L. Thompson: Algorithms for Solving Production-Scheduling Problems. Operations Research, Vol. 8, No. 4 (Jul.-Aug. 1960), pp. 487-503.
  9. Ulrich Dorndorf and Erwin Pesch: Evolution-Based Learning in a Job Shop Scheduling Environment. Computers and Operations Research, Vol. 22, No. 1, pp. 25-40, 1995.
  10. D. E. Goldberg: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.
  11. S. Louis, G. McGraw, and R. Wyckoff: Case-based reasoning assisted explanation of genetic algorithm results. Journal of Experimental and Theoretical Artificial Intelligence, 5:21-37, 1993.