Everything posted by gdrake

  1. To answer your question specifically with regards to the MultiEchelonSupplyChain example, the OnReplenishOrder process that you see in the process logic of the data table driven example is automatically executed by the Inventory elements whenever an inventory detects the need to replenish. Go to Definitions -> Elements, click on the Inventory elements, and you will see where that process is being referenced. The ShippingReceivingLogic process is executed when an entity enters any of the BasicNodes placed in the Facility View. For example, click on the DistributionCenter node and in the Add-On Processes -> Entered property you will see the name of that process being referenced. The TryFulfillOrder process is executed by the OnCustomerOrder process using an Execute step, which is in turn triggered by a customer order arrival. Note that if you are trying to figure out where the name of a process is being referenced, you can go to Project Home in the ribbon UI, open the Search window, and search for the name of the process in the model; it will show you the locations where it is being referenced.
  2. Add a 'NumberArrivals' data column to the data table that defines the arrival schedule. For example, define your arrival schedule table like this:

     SomeArrivalTableName
     ArrivalTime   NumberArrivals
     8:00 am       5
     8:30 am       5
     9:00 am       10
     9:30 am       8
     etc.

     Then on the Source object, specify the Arrival Mode as 'Arrival Table', the Arrival Time Property as 'SomeArrivalTableName.ArrivalTime', and the Arrival Events Per Time Slot as 'SomeArrivalTableName.NumberArrivals'. Note that you can also easily add columns to the data table for the Arrival Time Deviation and No-Show Probability, and then map those columns to the corresponding properties in the Source; for example, if the Arrival No-Show Probability actually differs depending on the day of week or period in the day (see the sketch below).
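     As a hedged sketch of that last idea (the 'NoShowProbability' column name here is hypothetical; any column name that you then map to the Source's Arrival No-Show Probability property would work):

     SomeArrivalTableName
     ArrivalTime   NumberArrivals   NoShowProbability
     8:00 am       5                0.05
     8:30 am       5                0.05
     9:00 am       10               0.10
     9:30 am       8                0.10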
  3. You can dynamically assign the 'HomeNode' state variable of a vehicle or worker object (e.g., ASSIGN: Vehicle1[1].HomeNode = BasicNode1, or ASSIGN: Vehicle.HomeNode = BasicNode1 if the vehicle of interest is associated with the token executing the Assign step). The 'Initial Node (Home)' property is simply the initial value assigned to that node reference state variable of the vehicle or worker. I'll have the description of the 'Initial Node' property updated to note the state variable that can be dynamically assigned to change the home node location.
  4. Adam, just a note that for a task sequence, based on your feedback on the Forum we have prioritized an enhancement that will allow you to use either the sequence numbering scheme that we currently support or an 'Immediate Predecessors' field to define task precedence. For your network diagrams, it seems like you will be more comfortable going with the Immediate Predecessors field approach, just listing out the predecessor numbers for each task.
  5. Go to the entity instance placed in the model and check the Population -> Initial Number In System property. Sounds like that property probably has a value of '1'. So that 1 entity is being initially created and then your Create step is creating 5 more.
  6. The Operation & Activity constructs are only supported for entities located inside a fixed processing location, such as an object modeling a 'Workstation'. While an entity is traveling on a TimePath, you can instead execute a process (using an Execute step) or a task sequence (using Task Sequences and a StartTasks step).
  7. Adam, yep, the sequence numbers that Glen posted above will produce that second flow diagram you mentioned:

     Task1 = 10
       'Task1' must be finished before any other task may be started, because its primary number '10' is smaller than any other primary number in the sequence.
     Task2 = 20.1.1
     Task3 = 20.1.2
     Task4 = 20.2
       Once 'Task1' is finished, all of the tasks with primary number '20' can be started in parallel.
     Task5 = 30.1.1
       'Task5' can start once 'Task2' is finished, because it has a higher primary number ('30') on the same exact task substream; in other words, you can do a 30.1.1 after 20.1.1 is finished.
     Task6 = 30.1
       'Task6' can start once 'Task2' and 'Task3' are finished, because at that point all tasks with sequence numbers starting with '20.1' have finished (all tasks with lower primary numbers that are part of the 'X.1' substream, including any nested substreams).
     Task7 = 40
       This task has the highest primary number, so it can be started only once all other tasks have finished.

     As I said in my first post, the task sequence numbering scheme that we provide is very flexible and should be able to handle any flow diagram that you come up with, and the approach is particularly friendly if tasks are being defined using a table data approach. Some time down the road, hopefully we'll get a chance to provide a visual flow chart editor that just auto-enters the task sequence numbers for you. Until then, once you get the hang of how the numbering scheme works with a few examples, it is hopefully not too difficult to pick up. The basic rules of the task sequence numbering scheme are also described in the help documentation. (A small sketch of one way to read the precedence rules follows this post.)
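     To make the rules above concrete, here is a minimal sketch in Python (not Simio code) of one way to read the numbering scheme described in these posts: a task must wait for every task that has a smaller primary number and whose substream path is a prefix of its own path (or vice versa). This is just an illustration of my reading of the rules, not Simio's actual implementation, and it lists all tasks a task must wait for, including transitive predecessors.

     # Hedged sketch: derive task precedence from dotted sequence numbers.
     # Assumed rule, taken from the posts above: task B waits for task A if A's
     # primary number is smaller and one task's substream path is a prefix of
     # the other's (the empty path of '10' or '40' is a prefix of every path).

     def parse(seq):
         parts = [int(p) for p in seq.split('.')]
         return parts[0], tuple(parts[1:])          # (primary number, substream path)

     def is_prefix(a, b):
         return a == b[:len(a)] or b == a[:len(b)]  # either path is a prefix of the other

     def predecessors(tasks):
         """tasks: dict of task name -> sequence number string."""
         parsed = {name: parse(seq) for name, seq in tasks.items()}
         return {b: sorted(a for a, (pa, sa) in parsed.items()
                           if pa < parsed[b][0] and is_prefix(sa, parsed[b][1]))
                 for b in parsed}

     tasks = {'Task1': '10', 'Task2': '20.1.1', 'Task3': '20.1.2',
              'Task4': '20.2', 'Task5': '30.1.1', 'Task6': '30.1', 'Task7': '40'}
     for name, preds in predecessors(tasks).items():
         print(name, 'must wait for:', preds)

     Running this reproduces the description above: Task5 waits for Task2 (and Task1), Task6 waits for Task2 and Task3 (and Task1), and Task7 waits for everything.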
  8. Here is a sequence numbering scheme for the task flow chart that you pasted. The task sequence numbering scheme that we support is very flexible and very friendly for defining a task sequence in a data table, and you can nest as many task substreams as you like, but I agree that it can take a moment for the numbering scheme to click. At some point, I agree it would be nice to provide a visual drawing tool where you draw the sequence and the sequence numbers are entered automatically. But hopefully once you understand how to number your first flow diagram above, it won't be too difficult to do any of your other task flow charts.

     Task1 = 10
     Task2 = 20.1
     Task3 = 20.2
     Task4 = 20.3
     The above says do Task1 first, then do Task2 (task substream '1'), Task3 (task substream '2'), and Task4 (task substream '3') in parallel.
     Task5 = 30.1.1
     Task6 = 30.1.2
     That says to do Task5 and Task6 when Task2 is finished. Basically, a task numbered 30.1.2 is in nested task substream '2' of task substream '1'.
     Task7 = 40
     You won't do this task until all the other tasks are finished; it is the last task.
  9. 1) Add a state variable to your model to serve as the switch control variable; for example, an integer state variable named 'MySwitchControlVar'.
     2) Specify the variable that you created as the Switch Control Variable on the Flow Node.
     3) On each of the possible outbound links that can be chosen, enter a Selection Weight expression for the link like 'MySwitchControlVar==1', which says for that particular outbound link, send flow to that link if the switch control variable has a value of '1'.
     4) Add your own state assignment logic to assign that switch control variable a new value whenever you want to 'switch over' to sending flow to a different outbound link from the flow node (see the sketch below).
     There may be a SimBit (a small example) installed with Simio that illustrates the approach. The description text of the Switch Control Variable property also provides some of the same information that I provided above.
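     As a sketch of steps 3 and 4 with three outbound links (the link names here are hypothetical):

     Link1 Selection Weight: MySwitchControlVar == 1
     Link2 Selection Weight: MySwitchControlVar == 2
     Link3 Selection Weight: MySwitchControlVar == 3

     Then, somewhere in your own process logic, an Assign step such as 'MySwitchControlVar = 2' switches the flow over to Link2.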
  10. Mark, Simio internally tracks the number of active suspensions applied to a process, or to the movement of an object or a flow regulator. You do see the new value of that counter in the trace window information whenever a new suspension is added or a suspension is cleared, but that counter is not currently exposed as a function on the affected object or element. I've noted your request.
  11. Mark, just a note that the FlowToItemConverter has 'Purge Contents Triggers', 'Clean-In-Place Triggers', and several add-on process triggers as well.
  12. You might build a model like this: Source -> ItemToFlowConverter -> the flow line, where the Source object is the Standard Library Source for creating discrete entity arrivals. So, use a Source to create discrete entities (of possibly varying entity types) from table data. Then have each discrete entity go into an ItemToFlowConverter to be converted into a specified quantity of flow, by specifying the 'Flow Quantity Per Item' property however you like (it sounds like that will possibly be a value also coming from the table data that was used to define the entity's type). The FlowSource is currently easiest to use as an infinite flow supply of some flow type. You can of course open and close the 'Output' flow regulator (e.g., Assign Output@MyFlowSourceName.FlowRegulator.CurrentMaximumFlowRate = 0 to close the output valve of the flow source). However, if you have a situation with discrete arrivals of flow quantities where each arrival has some entity type and some volume or weight quantity, I think using a Source -> ItemToFlowConverter combination is probably the best way to go.
  13. Multi-capacity preemption with the Standard Library Server is a tricky topic. When using capacity schedules, the Server as currently designed works most naturally if the on-shift capacity is a constant; for example, the capacity goes from 1 to 0 and then back to 1, or from 10 to 0 and then back to 10, and so forth. When the Server's capacity goes to 0, it goes into an 'Offshift' state, and the processing logic of all entities that have been allocated Server capacity and are located in the Server's 'Processing' station is suspended. That seems fine, though we could also add an option at some point that allows any current entities to finish processing while the Server is in an 'OffShiftProcessing' state (i.e., the Server works overtime to finish any current WIP). We have not put that sort of behavior option in yet, but it is certainly doable and is an idea that has been considered before. When the Server goes back into the on-shift 'Processing' state (which means it is processing at least one entity), all entities in the Server's 'Processing' station resume their processing delay times.

     What if the Server comes back on-shift with a capacity less than the number of entities already in-process? The current behavior is as mentioned above: the Server simply resumes all processing. We've discussed in previous years trying to do something more fancy here, but trying to only partially resume processing would require much more complicated logic. Let's say 10 entities are processing but the capacity is only 1. Which of the WIP entities is the lucky one selected to resume processing? The 9 entities that are not chosen would presumably have to be interrupted and then release the Server capacity that they hold, because they would have to wait to re-seize the Server capacity until the single entity finishes processing and releases capacity (thus allowing the next entity to re-seize). But those 9 entities may be expected to wait in the Server's Processing station. And if so, you might have to somehow make sure that no new entities who have yet to ever start processing (e.g., entities waiting in the Server's Input Buffer, or waiting outside the Server at its 'Input' node if there is no input buffer) can seize the Server before the interrupted ones. So you may have to put in some layered allocation rule scheme whereby new entities waiting in the input buffer are a lower priority to seize Server capacity than entities already in the Processing station waiting to re-seize it. Or maybe you just interrupt everybody, stick them in the Input Buffer of the Server, and let the ranking rule/dynamic selection rule specified on the Server sort them all out; and if it turns out that the next entity to get the Server capacity was not even WIP on the server when it went off-shift but arrived during the off-shift period, then so be it. It can be a bit complicated.

     We've always punted on this topic in the past because of the issues involved, though one of the reasons we added the Interrupt step was to give users a chance to customize a Server if they needed to go down this sort of road (as Dave Sturrock mentioned in his last post). The user can then customize the processing behavior of the Server to do what they think works best for them. Another work-around that we have sometimes suggested is to use multiple Servers, each with capacity 1. Not an approach for everyone, but for some problems that sort of modeling has worked out fine. That is a somewhat long-winded explanation of why, although this may be thought of as a 'bug', we've taken a 'works as intended' stance thus far. Though I don't think taking another look at this topic sometime is a bad idea; I totally understand how a user might naturally expect or want something different.
  14. Regarding #3, it is correct that by default, when a Server goes from off-shift to on-shift, all entities which have already been allocated Server capacity will simply continue processing (even if the scheduled capacity is less than the number of entities already using the Server; the server's scheduled utilization will be greater than 100% during the time period when it is working over scheduled capacity). Similarly, suppose you have a Server with scheduled capacity of 5 that has 5 entities currently processing, and the scheduled capacity is decreased from 5 to 4. That capacity decrease does not by default suspend the processing of one of the entities on the server. All 5 entities continue processing, and the server's scheduled utilization will be greater than 100% until at least one of the entities finishes and releases the server. If it is important in your model to never have a server utilized above scheduled capacity while on-shift, then you might add some Interrupt step-related process logic that essentially kicks entities off the server whenever capacity is decreased. The interruption logic will make the entities release the server capacity, store the remaining processing time, and transfer the entities from the processing station back into the input buffer to re-seize. You have control over which entities get kicked off, and of course if entities have to re-seize the Server, the server's allocation ranking and selection rules apply. The SimBit 'InterruptingServerWithMultipleCapacity.spfx' shows an example of interrupting entities on a server, saving the remaining processing time, and then transferring the entities back into the Input Buffer of the Server to re-seize capacity in order to continue processing.
  15. In that new model that you attached, in the 'GoToStation' process logic and the Transfer step, you will of course need to specify the exact 'B' entity whose station you are trying to transfer into. To do that, you may have to set some kind of variable pointing to that B entity, or otherwise use an absolute reference to it. Now, in this very simple model, I can cheat because I know that there is only one 'B' entity in the system. So if you, for example, specify the Station Name property in that Transfer step as 'B1[1].estacao', you will see the model work just fine. That says to transfer into the entity in the population of 'B1' type entities at population member index 1; since there is only one of those guys in the system, sure, that is the right guy. However, if you had multiple entities of type 'B1' in the system, you would of course want to reference the specific B1 type entity sitting at that node, presumably to be matched up with the A entity. In that case, you'll need to set up some kind of reference to get to the right 'B1' type entity, whether that is setting a variable reference, putting that B1 entity into a storage that the A entity can then search, or whatever.
  16. Note that I made the two changes that I described above and that simple model seemed to work fine. The carried entity transferred off into TransferNode2 and each entity then went to its respective Sink. Seemed like it worked; definitely no runtime errors.
  17. Put a Decide step in front of that Transfer step per my recommendation #1 above.
  18. Taking a quick look at that model, you have a couple of issues, but they are easily fixed. 1) In that 'TransferNode2_Entered' process, put a Decide step in front of the Transfer step to only execute the Transfer step if 'Entity.Is.B' is True. Since you will be essentially dropping off the A entity at the same node, when the A entity enters the node you don't want it to be doing this Transfer step logic. 2) In the Transfer step itself, you want to transfer the 'A' entity that the 'B' entity is carrying in that station location. So, on the Transfer step, specify the 'Entity Type' property as 'SpecificObject' (in Advanced Options). Then specify the 'Entity Object' to be transferred as 'B.estacao.Contents.FirstItem.Entity'. That tells the Transfer step to simply transfer the first entity item in the 'estacao' station of the 'B' entity that has entered the node and is executing this Transfer step.
  19. The Transfer step may be used to transfer an entity into or out of a station location. On the Transfer step, you use the Entity Type property (Advanced Options) to specify the entity object to be transferred.

     When transferring the entity into the station, specify the 'To' property as Station and then specify the station of the other entity that you want to transfer into (e.g., MyCustomEntity.SomeStationThatWasAdded). The 'From' property is whatever type of location the entity is physically coming from. Usually, within the logic of the entity that owns the station location, you will use an EndTransfer step in a process triggered by the station's 'Entered' event, to have the entering entity free the station's transfer-in mechanism so that other entities can also transfer into that station. This is important if your station is intended to hold more than a single entity concurrently; if you don't do that, the first entity into the station will block other entities from entering until it transfers out. See the usage of the EndTransfer step in the standard library objects for examples.

     When transferring the entity out of the station, specify the 'From' property as CurrentStation. The 'To' property will be wherever the entity is physically transferring to.

     The above should all work (a sketch follows this post). If you are getting errors, then you probably have a reference wrong or have otherwise misspecified the Transfer step properties.
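     A minimal sketch of those property settings, reusing the example names above (entity and station names are hypothetical):

     Transfer into the station:
       From: (wherever the entity physically is, e.g., CurrentNode)
       To: Station
       Station Name: MyCustomEntity.SomeStationThatWasAdded
     Process triggered by the station's 'Entered' event:
       EndTransfer step (frees the station's transfer-in mechanism for the next entity)
     Transfer out of the station:
       From: CurrentStation
       To: (wherever the entity is physically going next, e.g., a node)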
  20. Apologies for being so slow to respond, but you should be able to: a) add a station to an entity, and b) use the Transfer step to transfer an entity into and out of the station. Make sure that you are trying to transfer into the station of the actual right entity. After an entity transfers into a station, make sure that you do an EndTransfer step to end the transfer activity so that another entity is able to transfer into it. When you say that the transfer did not work, can you describe what you did and why it did not seem to work?
  21. Nadine, note that if you have an entity type in the system named 'EntityType1', then you may use the following set of functions to get the average/minimum/maximum time in system statistics for all entities of that type that were destroyed (or disposed) in the system, regardless of where or how the entities were disposed of:

     EntityType1.Population.TimeInSystem.Average
     EntityType1.Population.TimeInSystem.Minimum
     EntityType1.Population.TimeInSystem.Maximum

     Note that any entity can also access statistics for the entire entity type population that it is a member of using 'Entity.Population.XXXXXXX' syntax. Thus, refer to the 'Population' function namespace on an entity type or individual entity, and perhaps what you are looking for is located there.
  22. Mark, just an FYI that in Sprint 102, the following objects in the Flow Library have all been enhanced to provide new 'Purge Contents Triggers' functionality: Tank, ItemToFlowConverter, and FlowToItemConverter.

     For the Tank object, the 'Purge Contents Triggers' feature will allow you to specify conditional event-driven triggers that immediately remove and dispose of any contents held in the tank, putting the tank into an empty state. This will easily allow you, for example, to clear residual flow contents in a tank due to round-off errors, by having the tank perform a conditional check when some specified event occurs (e.g., each time a filled container exits a Filler that the Tank is supplying, purge the tank if the remaining contents are smaller than some epsilon quantity). A 'Purge Contents' trigger might also be defined to purge/flush/clear the Tank for any other sort of reason, to reset the flow line back to an empty state.

     For the ItemToFlowConverter object, the feature will allow you to specify conditional event-driven triggers that immediately remove and dispose of any generated flow waiting to exit the converter object, putting the converter's flow container into an empty state and cancelling any further outflow for the discrete item entity whose conversion was in-process.

     For the FlowToItemConverter object, the feature will allow you to specify conditional event-driven triggers that immediately remove and dispose of any inflow collected by the converter object for creating the next discrete item entity, putting the converter's flow container back into an empty state.

     In addition to the above features, based on some of the discussion in this Forum thread regarding the Filler (or Emptier) being more flexible about possible round-off calculations, a new 'Stop Early Event Name' property may be found on those two objects in Sprint 102. The 'Stop Early Event' feature on the Filler & Emptier objects will allow you to define an optional event that ends the filling or emptying operation early (before reaching the desired fill or empty target) if the specified event occurs. If, for example, you have a Tank supplying a Filler, you might specify that the Filler should stop filling the current container entity if the container becomes full OR if the tank becomes empty, whichever event occurs first. This type of logic ensures that the container entity always finishes the filling operation and exits the filler; even in a scenario where the container entity's capacity is 0.2 but the contents in the tank are 0.199999, the tank would go empty and the filling operation would be considered completed.

     Thanks again for the feedback on this thread. The input on this forum is appreciated, and we often use information here as input into possible new enhancements and design evaluations.
  23. Mark: Specifically for the Flow Library Filler & Emptier objects, what I will probably do is add something like a 'Stop Early Event' option, where you can optionally specify to stop the filling or emptying operation early if some specified event occurs. Then in the Filler, for example, you could specify to fill the container entity until full or until the source tank is empty (i.e., the tank's 'FlowContainer.Empty' event occurs), whichever of those events happens first. In your sort of model, that kind of simple approach would guarantee the container is 'filled' and exits the Filler regardless of round-off error. If your fill target is 2.0 and there is only 1.99999999 in the source tank, then the tank goes empty first and the filler stops. If there is 2.000000000000001 in the source tank, then the container entity's full event happens first and the filler stops. Either way, the container entity exits the filler and away it goes.

     Now, in the latter case where there was 2.00000000000001 in the source tank, you might of course have some tiny residual left in the tank after the filler is done. For that case, what I have been considering is adding an 'Auto Destroy Contents Mode' to the Flow Library Tank, which allows you to auto-destroy the tank's contents if some specified event occurs and perhaps a condition is also true. Thus, for example, you might want to auto-destroy a tank's contents if the inflow entity type is changing and new product is entering the tank (to 'clean' the tank). In this filler case, when the container entity exits the Filler, you might want the Tank to be notified of that event, check whether there is some tiny amount still in the tank, and if so just automatically destroy it.

     If I put in features like the above, I think you'd be able to model your flow transfer situations using the Flow Library Tank and Filler, for example, without having to worry about any EPSILON tricks to account for round-off error. And these are generally the type of features that I lean towards; having Simio's flow engine itself artificially adjust flow quantities by small epsilon amounts to try to deal with possible round-off errors doesn't feel so good.
  24. Mark, when you replied to 'Glen' did you mean me? We have a 'Glen Wirth' and then me (Glenn Drake!). Small company, but they had to hire two Glen(n)s just to occasionally confuse us, ha ha.

     On a totally separate note, today I looked at that possible issue of having the Filler and Emptier objects in the Flow Library deal with floating point round-off error automatically, and tried a few things, but found nothing that felt good enough to actually put in there. Simple example: if you have a Tank with 1.0 cubic meters, and an attached Filler object removes flow from that tank in 0.2 cubic meter increments, then what happens on my machine is 1.0 - 0.2 - 0.2 - 0.2 - 0.2 - 0.2 = 1e-16 (more or less). Now, after that fifth container entity gets filled, you might expect the Tank to be at 0.0, but instead that 1e-16 is still in there. If you then have those 5 filled container entities, in the same model, go and empty their contents into a second Tank using an Emptier, then on my machine I see 0.2 + 0.2 + 0.2 + 0.2 + 0.2 = 1.0000000000000009 (or something like that) added to the Tank. Again, floating point math error: adding up 0.2 five times does not equal 1.0.

     This kind of stuff is incredibly difficult to deal with automatically. There is a general rule in trying to clean up round-off errors in general-purpose code: if you don't know what you don't know, tread carefully. I could start putting in EPSILON tricks, but that is invariably like throwing darts at a dartboard; you are not only guessing what a 'good' epsilon value might be, but round-off errors can be a little less or a little more across a variety of situations (as illustrated in my simple example above). Usually, I try to avoid getting into EPSILON stuff like the plague. I'm always looking for cleaner ways to correct round-off (i.e., situations where the true value is very clear and thus the variable can simply be forced to that value without any kind of epsilon-related checking).

     We can keep thinking about this, but for now, if you are doing flow transfers involving discrete quantities of fluid/mass that need to be 'exact', and round-off error calculations are sometimes throwing a wrinkle into logic or animation, all that I can recommend is to put in extra process logic checks as convenient and necessary to deal with such things. For example, in my example above, after filling the 5th container, make sure any residual flow left in the source Tank is destroyed. Or, if the flow quantity in the source tank is 0.999999999999999 when you really need it to be '1.0', because there was round-off error when the flow was added to the tank, then perhaps set the quantity variables accordingly to make sure the flow transfer still completes successfully. I know this can be a bit painful depending on the situation. This continuous flow stuff can certainly be a bit of a pain at times compared to simulating discrete items, due to floating point arithmetic. We'll keep working hard on it and gradually improving things where we can.
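     For anyone who wants to reproduce the draining example outside of Simio, here is a minimal sketch in Python (the epsilon threshold is an arbitrary illustrative value, not anything Simio uses; the exact residual can differ with accumulation order):

     # Drain a 1.0 cubic meter tank in 0.2 increments, as in the example above.
     level = 1.0
     for _ in range(5):
         level -= 0.2
     print(level)         # 5.551115123125783e-17, not 0.0
     print(level == 0.0)  # False: the tiny residual defeats an exact-zero check

     # The kind of check that purge/cleanup process logic makes easy:
     EPSILON = 1e-9       # arbitrary illustrative threshold
     if abs(level) < EPSILON:
         level = 0.0      # treat the residual as an empty tank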
  25. The property design of the Flow Library Tank object, for user simplicity, assumes a flow contents unit type of either 'Volume' or 'Weight'. The tank's initial capacity, as well as the tank level mark locations, are then specified in either volume or weight units. The tank was designed this way under the assumption that a user normally has a tank capacity (and level marks) specified in either volume or weight; not both, and not with two distinct volume and weight capacity constraints at the same time. If an advanced user wants to model a Flow Library Tank object with both volume and weight capacity limits, that is of course still possible. Simply assign the TankName.FlowContainer.CurrentVolumeCapacity and/or TankName.FlowContainer.CurrentWeightCapacity state variables using an Assign step in process logic (perhaps in run initialization logic, or dynamically at any time during the run; see the sketch below). All the Flow Library Tank's properties do is, on run initialization, set one of those capacity constraints to the specified value and the other capacity constraint to 'Infinity'. But you can re-assign both capacity constraints to your own values in logic, using those provided state variables, whenever you want.
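     As a minimal sketch of that idea (the tank name and the capacity values here are hypothetical), two Assign steps in run initialization process logic might look like:

     Assign: Tank1.FlowContainer.CurrentVolumeCapacity = 10.0     (cubic meters)
     Assign: Tank1.FlowContainer.CurrentWeightCapacity = 8000.0   (kilograms)

     After those assignments, the tank is constrained by whichever of the two limits it reaches first.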