Schedule constraints

Ben Fulton

In Part 1 we set up a very simple scheduling model.

In a more realistic schedule, of course, some jobs have to wait until other jobs are finished. The guy putting the doors on the car can't get started until the person installing the engine is done. Let's set up a class modelling a constraint that says one job can't start until another job has finished.

    public class UniformConstraint
    {
        // The job that has to make progress before the constrained job can run
        public readonly Job BlockingJob;

Sometimes there's also a lag between the first job finishing and the second job starting. That lag can be expressed as an amount of time, but it can also be expressed in iterations – say, job 2 can't start until job 1 has run twice. We'll keep track of both of those.

        // Extra time that has to pass after the blocking job finishes
        public readonly int Latency;

        // Lag expressed in iterations of the blocking job (e.g. "job 1 has run twice")
        public readonly int Iterations;
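The constructor isn't shown, but judging from how the constraints are built in the tests below, it just captures these three values (a sketch, assuming that argument order):

        public UniformConstraint(Job blockingJob, int latency, int iterations)
        {
            BlockingJob = blockingJob;
            Latency = latency;
            Iterations = iterations;
        }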

We'll add a list of constraints to our Job class. Then we'll need to determine, for a given iteration and time, whether a given constraint is blocking a job from running:

        public bool Blocker(int iteration, int time)
        {
            // The iteration of the blocking job that this iteration depends on
            int triggerIteration = iteration + Iterations;

            // Blocked if the blocking job hasn't reached that iteration yet
            if (triggerIteration >= BlockingJob.CompletedIterations())
                return true;

            // Otherwise, blocked until the trigger iteration finishes and the latency elapses
            return BlockingJob.StartTime(triggerIteration) + BlockingJob.processingTime + Latency > time;
        }

 

(Together with the constructor, that pretty much rounds out the Constraint class.)  Now over to our Job class. For a job and a specific time, we'll need to know whether the job is blocked from running or not:

        public bool Blocked(int time)
        {
            // startingTimes.Count is the index of the next iteration this job would start
            return Blockers.Any(constraint => constraint.Blocker(startingTimes.Count, time));
        }
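The Blockers list itself and the Constrain method used in the tests aren't shown in the post; presumably they're about as simple as this sketch:

        private readonly List<UniformConstraint> Blockers = new List<UniformConstraint>();

        public void Constrain(UniformConstraint constraint)
        {
            Blockers.Add(constraint);
        }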

 

Given all this, we’ll have to rewrite our processor.  We’ll just tick the clock, and at each step, we’ll start any jobs that are free to run.  (There are certainly more efficient ways to do this, but that’s ok.)

        public void Process(int numIterations)
        {
            int time = 0;
            while (!IterationComplete(numIterations))
            {
                // Start every job that is unblocked and not already running at this tick
                foreach (var job in Unblocked(time).Where(job => job.IsIdle(time)))
                {
                    job.Start(time);
                }

                time++;
            }
        }
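The helpers Unblocked, IterationComplete, IsIdle, and Start aren't shown here. Assuming the schedule keeps its jobs in a _jobs field, the first two might look roughly like this (a sketch, not the post's code):

        private IEnumerable<Job> Unblocked(int time)
        {
            return _jobs.Where(job => !job.Blocked(time));
        }

        private bool IterationComplete(int numIterations)
        {
            // Done once every job has run the requested number of iterations
            return _jobs.All(job => job.CompletedIterations() >= numIterations);
        }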

That's more or less it. Here's a simple test. It shows that if our second job is blocked by our first job, the second job is blocked at time 2 but free to run at time 7, when the first job is complete.

        [Fact]
        public void Constraint_blocks_job2_until_job1_completes()
        {
            // Job 2 can't start until job 1 finishes (no latency, no iteration lag)
            _jobs[1].Constrain(new UniformConstraint(_jobs[0], 0, 0));

            _jobs[0].Start(0);

            Assert.True(_jobs[1].Blocked(2));
            Assert.False(_jobs[1].Blocked(7));
        }
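For those numbers to work out, the test fixture (not shown) presumably gives the first job a processing time of 7, so that a start at time 0 means completion at time 7. Something like:

        private List<Job> _jobs = new List<Job>
        {
            new Job { processingTime = 7 },
            new Job { processingTime = 2 }   // the second job's time doesn't matter for this test
        };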

Here’s a more complex example:  A part is manufactured successively in two shops.  The first shop can handle at most two parts at the same time, while the second can handle three parts.  The first shop has three machines with processing times of 1,2, and 2.  The second shop has two machines with processing times 2 and 3.  Finally, a part takes a time of 6 to be sent from the first shop to the second.

Here’s a test for that.

        [Fact]
        public void Complex_example()
        {
            // Shop 1: three machines with processing times 1, 2, 2
            // Shop 2: two machines with processing times 2, 3
            _jobs = new List<Job>
            {
                new Job { processingTime = 1 },
                new Job { processingTime = 2 },
                new Job { processingTime = 2 },
                new Job { processingTime = 2 },
                new Job { processingTime = 3 }
            };

            // Shop 1 can hold at most two parts at once; shop 2 at most three
            _jobs[2].Constrain(new UniformConstraint(_jobs[0], 0, 2));
            _jobs[4].Constrain(new UniformConstraint(_jobs[3], 0, 3));

            // A part moves through the machines in order,
            // with a transport time of 6 from the first shop to the second
            _jobs[1].Constrain(new UniformConstraint(_jobs[0], 0, 0));
            _jobs[2].Constrain(new UniformConstraint(_jobs[1], 0, 0));
            _jobs[3].Constrain(new UniformConstraint(_jobs[2], 6, 0));
            _jobs[4].Constrain(new UniformConstraint(_jobs[3], 0, 0));

            var sched = new CyclicSchedule(_jobs);
            sched.Process(3);

            Assert.Equal(22m, sched.IterationCompletionTime(0));
            Assert.Equal(24m, sched.IterationCompletionTime(1));
            Assert.Equal(26m, sched.IterationCompletionTime(2));
        }

Interesting that this line has a startup time of 20, but after that it pumps out parts every 2 ticks.  Pretty efficient!

One other thing I’d like to note – it’s perfectly valid to have a negative value for a latency.  This would indicate that, for example, job 3 can start no later than 10 ticks after job 1.  This isn’t relevant to our processing model but leads to some interesting restrictions on the schedules.  We’ll look at that more next time.   This code is available on GitHub.
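To make a negative latency concrete under the Blocker formula above (my example, not the post's): a latency of -3 simply relaxes the start condition, so the constrained job may begin up to 3 ticks before its blocking job finishes.

            // Hypothetical constraint: job 2 may start 3 ticks before job 1 completes
            _jobs[1].Constrain(new UniformConstraint(_jobs[0], -3, 0));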

posted on Thursday, September 9, 2010 10:03 PM