Thanks Adam for the reply. I know a fair amount about Processes; I just needed clarification about some aspects of large models created using tables. The example I mentioned in the original post may be simple, but it could easily scale to include hundreds of locations. Below I give some examples of the aspects that get tricky in such large models.
Thanks Glenn for the clarification. The tip about finding where processes are being referenced in a model is great!
The MultiEchelonSupplyChain example may look simple, but it has the potential to be complex. When I asked the question, what I had in mind was a similar model but with hundreds of locations like DistributionCenter and Retailer (Basic Nodes).
Imagine you want 300+ basic nodes (as in my case) to have the same add-on process, or that you have 300+ inventory elements whose statistics you want to write out using a Write step. Doing either of those manually is tedious.
As for referencing the same add-on process from hundreds of objects (basic nodes in my case), I found a workaround: add the add-on process to one of the objects, then sub-class that object so the process becomes the default for the new sub-classed object. Finally, change all the other objects to the new sub-class.
As for writing statistics for 300+ inventory elements, I couldn't find a way to automate the process. To be clear, for each inventory element I want to write statistics at various points throughout the simulation run. The standard reported statistics for inventory elements are only "Totals", "Averages", or a function of the two. What I want is to see how a statistic changes over time, so I need to write to a CSV file using a Write step.
When working with large models, there's a need for automating some modeling aspects. Creating objects and elements can be easily automated using tables. It's dealing with these created objects and elements that sometimes needs tricks and workarounds to automate.
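As a small illustration of the table-driven side, here is a sketch of generating the kind of data table you could import to create hundreds of locations in one go. The column names (ObjectName, ObjectType, InitialInventory) are placeholders I made up, not anything Simio requires:

```python
import csv
import io

def build_location_table(n_dcs, n_retailers):
    """Generate a CSV table of locations that a model could import.

    Column names are illustrative placeholders; map them to whatever
    your table schema actually uses.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["ObjectName", "ObjectType", "InitialInventory"])
    for i in range(1, n_dcs + 1):
        writer.writerow([f"DistributionCenter{i}", "BasicNode", 100])
    for i in range(1, n_retailers + 1):
        writer.writerow([f"Retailer{i}", "BasicNode", 25])
    return buf.getvalue()

table = build_location_table(50, 250)  # header + 300 data rows
```

Generating the table outside the tool and importing it is trivial; it's the follow-on wiring (shared add-on processes, per-element statistics) that still needs the tricks above.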