
Moment of schedule and capacity decrease for busy resource


averbraeck


When does a resource change capacity when this is requested through a schedule or through an assignment of the CurrentCapacity? When studying the behavior, we deduced the following, but want to check that it is correct (the documentation did not give much information):

1. When the resource is busy (say, fully occupied with a capacity of 5), and the capacity is decreased to 1 or more, all entities that are in process finish their work as scheduled. The effective change apparently takes place after busy entities are finished. (in Arena, this was either the IGNORE or the WAIT behavior).

2. When the resource is busy (say, fully occupied with a capacity of 5), and the capacity is decreased to 0, it immediately stops processing the current entities, which will stay in the Processing queue. When capacity is increased again to 5, the entities are finished. (in Arena, this was the PREEMPT behavior).

3. When the resource is busy (say, fully occupied with a capacity of 5), and the capacity is decreased to 0, it immediately stops processing the current entities. When capacity is increased again to, e.g., 1, all entities seem to finish at the same time. This is strange, because one would expect the entities to be finished one-by-one. This could be considered a bug?

Finally, is it possible to implement IGNORE (schedule change starts late if entity busy, but next schedule change remains unchanged), WAIT (schedule change starts late if entity busy, and next schedule change is shifted accordingly) and PREEMPT (schedule change starts immediately) behaviors in Simio?
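For concreteness, the three Arena rules named above can be sketched as small functions that compute when a scheduled capacity change takes effect. This is a plain-Python illustration (not Simio or Arena code), with invented function names and illustrative clock times:

```python
# Hypothetical sketch of Arena's three rules for a scheduled capacity change
# on a busy resource. Inputs and outputs are clock times in hours.

def ignore_rule(change_time, busy_until, next_change):
    """IGNORE: change starts late if busy; the next change is unaffected."""
    return max(change_time, busy_until), next_change

def wait_rule(change_time, busy_until, next_change):
    """WAIT: change starts late if busy; the next change shifts by the delay."""
    delay = max(0.0, busy_until - change_time)
    return change_time + delay, next_change + delay

def preempt_rule(change_time, busy_until, next_change):
    """PREEMPT: change takes effect immediately; work is interrupted."""
    return change_time, next_change

# A break scheduled at 12:00 while the resource is busy until 12:30,
# with the end of the break scheduled at 13:00:
print(ignore_rule(12.0, 12.5, 13.0))   # (12.5, 13.0) -> break is shortened
print(wait_rule(12.0, 12.5, 13.0))     # (12.5, 13.5) -> break is shifted
print(preempt_rule(12.0, 12.5, 13.0))  # (12.0, 13.0) -> work is interrupted
```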

-- Alexander Verbraeck, TU Delft, Netherlands


Regarding #3, it is correct that by default, when a Server goes from off-shift to on-shift, all entities that have already been allocated Server capacity simply continue processing, even if the scheduled capacity is less than the number of entities already using the Server. (The Server's scheduled utilization will be greater than 100% during the time period when it is working over scheduled capacity.)


Similarly, suppose you have a Server with a scheduled capacity of 5 that has 5 entities currently processing, and the scheduled capacity is decreased from 5 to 4. By default, that capacity decrease does not suspend the processing of any of the entities on the Server. All 5 entities continue processing, and the Server's scheduled utilization will be greater than 100% until at least one of the entities finishes and releases the Server.
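As a quick illustration of the utilization figure mentioned above (plain arithmetic, not Simio's internal statistics calculation):

```python
# Scheduled utilization is units in use divided by scheduled capacity, so it
# exceeds 100% whenever busy units outnumber the scheduled capacity.

def scheduled_utilization(units_busy, scheduled_capacity):
    return 100.0 * units_busy / scheduled_capacity

print(scheduled_utilization(5, 4))  # 125.0 -> over scheduled capacity
```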


If it is important in your model that a Server is never utilized above its scheduled capacity while on-shift, then you might add some Interrupt-step process logic that essentially kicks entities off the Server whenever capacity is decreased. The interruption logic makes the entities release the Server capacity, stores the remaining processing time, and transfers the entities from the Processing station back into the Input Buffer, where they have to re-seize. You have control over which entities are kicked off, and of course if entities have to re-seize the Server, then the Server's allocation ranking and selection rules apply.


The SimBit 'InterruptingServerWithMultipleCapacity.spfx' shows an example of interrupting entities on a server, saving the remaining processing time, and then transferring the entities back into the Server's Input Buffer, where they must re-seize capacity in order to continue processing.
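The interrupt-and-requeue logic described above could be sketched in plain Python as follows. This is not Simio process logic; the class and attribute names are invented for illustration, and the victim-selection rule here (last started first) is an arbitrary choice:

```python
# Sketch: when capacity drops, kick excess entities off the server, remember
# their remaining processing time, and requeue them so they must re-seize.

class Entity:
    def __init__(self, name, finish_time):
        self.name = name
        self.finish_time = finish_time  # scheduled completion (clock time)
        self.remaining = None           # filled in if interrupted

class Server:
    def __init__(self, capacity):
        self.capacity = capacity
        self.processing = []    # entities currently holding capacity
        self.input_buffer = []  # entities waiting to (re-)seize capacity

    def decrease_capacity(self, new_capacity, now):
        self.capacity = new_capacity
        while len(self.processing) > new_capacity:
            e = self.processing.pop()          # victim selection is up to you
            e.remaining = e.finish_time - now  # save remaining processing time
            self.input_buffer.insert(0, e)     # back to the buffer to re-seize

s = Server(3)
s.processing = [Entity("a", 10.0), Entity("b", 12.0), Entity("c", 15.0)]
s.decrease_capacity(1, now=5.0)
print([e.name for e in s.processing])                   # ['a']
print([(e.name, e.remaining) for e in s.input_buffer])  # [('b', 7.0), ('c', 10.0)]
```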


Glenn, Dave, thanks for the replies. Totally clear, and IMHO very important to state this CLEARLY and OFTEN in the Simio documentation, where it seems to be missing entirely. I had a few models where processing times were quite long and where lunch breaks and end-of-workday times for the employees (resources) were modeled through schedules, but the results were not valid. I now understand that most employees probably did not take lunch at all and went home late.


It would be ideal if "preemptive" schedule changes and "wait" behavior for breaks could also be offered as standard options when building a schedule: e.g., indicating that the scheduled capacity change takes effect immediately (preempt), waits for the current job(s) to finish (ignore, the current behavior), or shifts the next capacity change by the time needed to finish the current job(s) (wait; although I realize this is difficult to implement consistently).


Alexander.


Regarding item #3: this is behavior that no one will understand. Suppose a model has a large volume of jobs, each with a duration of 2 hours or more. In the late afternoon, 10 people work on these jobs. When they all go home at 6, work stops (so going off-shift completely is preemptive behavior, while the model behaves completely differently when I change the capacity from 10 to 1...). When 1 person comes in at 7 in the morning, he picks up all 10 remaining jobs and finishes them in parallel... I would consider that a bug.


Alexander.


First of all, I won’t argue with you at all – I would also like to see Simio have more intuitive behavior in our Standard Library objects. But perhaps my explanation below will make it a little bit more intuitive.


But the good news is that, unlike other products that limit the behavior to one of a few predefined choices:

1) In Simio your choices are unlimited; it is not too difficult to make the resource behave exactly as you want. The OnCapacityChanged add-on process (or related processes like On Shift and Off Shift) can use the Interrupt, Suspend, and Resume steps, as illustrated in several SimBits.

2) If you want some behavior to be your default, just create a custom object with that behavior.


If you think about the built-in behavior as follows it might help…


PARADIGM: A resource of capacity > 1 still represents a single resource, but just has the ability to process multiple concurrent entities. But it still only has a single state (Idle, Busy, Failed, …), e.g. if it is failed, the entire resource is failed (all capacity units), and if it is off-shift (e.g. capacity=0) all units are off-shift.


REDUCING CAPACITY: When the available capacity is reduced below the current number of busy units, there are many possible valid behaviors depending on your system. The default behavior of our Standard Library is simply to finish working on the current entity and then take the unit off-line. But you can use the tools mentioned above to change that default. For example, you might implement a behavior that says "if I am within 10 minutes of completion, go ahead and complete it; otherwise record the remaining time, add 5 minutes of restart time, and put the entity at the front of the queue of waiting entities".
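That example policy could be sketched like this (plain Python, not Simio process steps; the 10-minute window and 5-minute restart penalty are the illustrative values from the text):

```python
# On a capacity decrease: finish an entity that is nearly done; otherwise
# record its remaining time plus a restart penalty and requeue it up front.

GRACE_WINDOW = 10.0    # minutes: finish if this close to completion
RESTART_PENALTY = 5.0  # minutes added when work is resumed later

def on_capacity_decrease(now, finish_time, waiting_queue):
    remaining = finish_time - now
    if remaining <= GRACE_WINDOW:
        return "finish"  # let it complete
    waiting_queue.insert(0, remaining + RESTART_PENALTY)
    return "requeued"

queue = []
print(on_capacity_decrease(0.0, 8.0, queue))   # 'finish'   (8 min left)
print(on_capacity_decrease(0.0, 30.0, queue))  # 'requeued' (30 min left)
print(queue)                                   # [35.0] -> 30 min + 5 min restart
```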


OFF-SHIFT: When a resource is taken off-shift (capacity=0) all units are off-shift. Current Standard Library behavior causes all processes to be suspended (Suspend/Resume Steps) while it is off-shift. This essentially means that everything is “frozen” in place. The entities are not removed from the resource, but rather just suspended so no progress is made during the off-shift period. When the off-shift period is complete (e.g. capacity > 0) then all the entities resume exactly where they left off.


Unfortunately, that last phrase is what causes the unexpected behavior when you suspend at one capacity and then resume at a lower one. You are essentially combining the two paragraphs above: you suspend all activity while off-shift, and then after resuming you behave as though you had just reduced the units in service, which under the default behavior means keep working on the in-process entities until they are completed.
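Numerically, that combination plays out as follows (a plain-Python sketch with assumed values, echoing the 10-jobs example earlier in the thread):

```python
# While off-shift everything is frozen; on resume, every suspended entity
# continues regardless of the new capacity, so they all finish together.

def resume_finish_times(remaining_at_suspend, resume_time):
    # Each entity resumes exactly where it left off at the resume time.
    return [resume_time + r for r in remaining_at_suspend]

# 5 entities each had 2.0 h left at the 18:00 shift end; the shift resumes
# at 7:00 the next day (hour 31.0 on a continuous clock):
print(resume_finish_times([2.0] * 5, 31.0))  # all finish at hour 33.0
```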


In your case, it sounds like you would want to add logic to the On Shift process so that, if capacity is set to less than the number busy, you Interrupt the excess entities and send them back to the entry queue. And then perhaps build that behavior into a custom Server object and use that object routinely.


Dave, thanks for the explanation. This makes it (especially the paradigm) a lot clearer. If some of this can be included in the standard description of resources and schedules in the manual and the help files, I believe it will be very beneficial to a lot of people! A SimBit demonstrating the capacity-reduction example you outline above would also be very helpful for many (I will certainly build that example for my classes; it will answer a lot of questions I routinely get).


Finally, you are absolutely right that extending the standard objects and changing their behavior is easy and helps address many issues. I can create a 'preemptive server' that way in a matter of minutes.


Thanks again! Alexander.


Multi-capacity preemption with the Standard Library Server is a tricky topic.


When using capacity schedules, the Server as currently designed works most naturally if the on-shift capacity is a constant. For example, the capacity goes from 1 to 0 and then back to 1, or from 10 to 0 and then back to 10 and so forth.


When the Server's capacity goes to 0, it goes into an 'Offshift' state and the processing logic of all entities that have been allocated Server capacity and are located in the Server's 'Processing' station get suspended. That seems fine, though we could also add an option at some point which allows any current entities to finish processing while the Server is in an 'OffShiftProcessing' state (i.e., the Server works overtime to finish any current WIP), but we have not put that sort of behavior option in yet. But that is certainly doable and has been an idea considered before.


When the Server goes back into the on-shift 'Processing' state (which means it is processing at least one entity), then all entities in the Server's 'Processing' station resume their processing delay times.


The Server comes back on-shift with a capacity less than the number of entities already in-process


The current behavior is as mentioned above: the Server simply resumes all processing. We've discussed in previous years trying to do something fancier here, but only partially resuming processing would require much more complicated logic. Let's say 10 entities are processing but the capacity is only 1. Which of the WIP entities is the lucky one selected to resume processing? The 9 entities that are not chosen would presumably have to be Interrupted and release the Server capacity they hold, because they then have to wait to re-seize the Server until the single entity finishes processing and releases capacity (thus allowing the next entity to re-seize). But those 9 entities may be expected to wait in the Server's Processing station. And if so, you might have to somehow make sure that no new entities who have yet to ever start processing (e.g., entities waiting in the Server's Input Buffer, or waiting outside the Server at its 'Input' node if there is no input buffer) can seize the Server before the interrupted guys. So you may have to put in some layered allocation rule scheme whereby new entities waiting in the input buffer have lower priority to seize Server capacity than entities already in the Processing station waiting to re-seize.


Or maybe you just Interrupt everybody and stick them in the Input Buffer of the Server and let the specified ranking rule/dynamic selection rule specified on the Server sort them all out? And if it turns out that the next entity who gets the Server capacity was not even WIP on the server when it went off-shift but was a guy who arrived during the off-shift period, then so be it.
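That simpler "interrupt everybody" option amounts to a single reallocation step. A plain-Python sketch, with "shortest remaining time" as an assumed ranking rule (in Simio you would pick your own ranking or dynamic selection rule):

```python
# Interrupt all in-process entities into the input buffer and let a single
# ranking rule decide who seizes the reduced capacity -- including entities
# that arrived during the off-shift period.

def reallocate(in_process, new_arrivals, capacity):
    # Entities are (name, remaining_time) tuples; everyone competes equally.
    buffer = in_process + new_arrivals
    buffer.sort(key=lambda e: e[1])              # rule: shortest remaining first
    return buffer[:capacity], buffer[capacity:]  # (seized, still waiting)

seized, waiting = reallocate(
    [("wip1", 1.5), ("wip2", 0.5)], [("new1", 0.2)], capacity=1)
print(seized)  # [('new1', 0.2)] -> a new arrival may win, as noted above
```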


And so forth. It can be a bit complicated. We've always punted in the past on this topic because of the issues involved, though one of the reasons that we did add the Interrupt step was to give users a chance to customize a Server if they needed to go down this sort of road (as Dave Sturrock mentioned in his last post). The user can then customize the processing behavior of the Server to do what they think works best for them.


Another work-around we have sometimes suggested is to use multiple Servers, each with capacity 1. Not an approach for everyone, but for some problems that sort of modeling has worked out fine.


But that is a somewhat long-winded explanation of why, although it may be thought of as a 'bug', we have taken a 'works as intended' stance thus far. Though I don't think taking another look at this topic sometime is a bad idea; I totally understand how a user might naturally expect or want something different.

