How to create a multi-threaded, non-blocking machine?
I'm trying to model a machine that can process many entities (say hundreds or thousands) simultaneously, each for a fixed duration, without blocking or serializing them. For example, up to 500 entities can be in the machine at any given time, but each one must stay in the machine for, say, 2 hours. Creating a single queue to feed hundreds or thousands of instances of the same single-threaded machine seems like a bad way to approach it.
Example: I'm thinking of something along the lines of a continuous oven used for baking bread, or any looping conveyor system. See this video for what I'm suggesting: https://www.youtube.com/watch?v=3UjUWfwWAC4&t=120
If the output rate is 500/hr but it takes 1 hour to traverse the continuous oven, then I don't think it's accurate to treat each item as taking 1/500th of an hour and use a regular machine rated at 500/hr. That might accurately model the output rate of the machine, but not its capacity (i.e. the number of parts held inside simultaneously), nor does it reflect the total time an entity (a loaf of bread) takes to pass through the system. Every entity takes the full duration to traverse the loop, but the machine (the oven) can hold many items at the same time.
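The relationship between these three quantities is Little's Law: average work-in-process = throughput × time in system. A quick check with the numbers from the example above (500/hr output, 1 hour traverse time) confirms the intuition that capacity and output rate are distinct:

```python
# Little's Law: WIP (items inside the system) = throughput * time in system
throughput = 500      # items per hour leaving the oven
traverse_time = 1.0   # hours each item spends inside

wip = throughput * traverse_time
print(wip)  # 500.0 -> 500 loaves are inside the oven at steady state
```

So a single-server machine rated 500/hr reproduces the throughput but implies a capacity of 1 and a time-in-system of 1/500 hr, which is exactly the mismatch described above.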
You are right: the Machine object, and servers in general, are single-threaded, which allows them to follow the (simplified) flow getEntity -> process -> endProcessingActions -> signalReceiver -> removeEntity. In ManPy there is a Conveyer object in /dream/simulation/Conveyer.py. This is not strongly maintained though. Reasons:
- Until now there has been no case study
- It is complex to handle such an object in discrete event simulation, because conveyor movement is inherently continuous. The object therefore has to calculate when the next significant thing will happen (e.g. "I have moved enough that there is space for the next Entity") and generate an event internally. From experience, this becomes quite complex. The object is covered by two unit tests, which means it is not broken, but it is still not fully maintained.
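To illustrate the "calculate when something important will happen" idea, here is a small sketch (plain Python, not ManPy's actual Conveyer code; all names are made up for illustration) that converts continuous belt motion into one discrete event: the time until the entry end has enough free space to load the next entity:

```python
def time_until_entry_free(positions, entity_length, speed):
    """Return how long until the conveyor's entry end has `entity_length`
    units of free space, i.e. when the next entity can be loaded.

    positions: tail distance of each entity from the entry (smallest =
               closest to the entry end)
    speed:     conveyor speed in distance units per time unit
    """
    if not positions:
        return 0.0                        # belt is empty: space right now
    gap = min(positions)                  # free space at the entry end
    if gap >= entity_length:
        return 0.0                        # already enough room
    return (entity_length - gap) / speed  # time for the gap to open up

# Last loaded entity's tail is 0.5 units from the entry; items are 1 unit
# long and the belt moves 0.25 units per minute -> 2 more minutes to wait.
print(time_until_entry_free([0.5, 2.0], entity_length=1.0, speed=0.25))  # 2.0
```

The object would schedule an internal event that far in the future, and reschedule whenever loading or unloading changes the belt's state, which is where the complexity accumulates.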
I think the possible solutions are:
- create parallel machines fed by the queue; this can be done in a loop, though it will of course add computational overhead, or
- use the Conveyer, see how it works, and report any issues.
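For the behaviour the question describes (fixed dwell time, bounded simultaneous capacity), here is a minimal, library-free sketch (plain stdlib Python, not ManPy's API; all names are illustrative) of the oven as an event-driven capacity-constrained server: arrivals queue up, at most `capacity` entities are inside at once, and each stays for exactly `dwell` time units:

```python
import heapq
from collections import deque

def simulate_oven(arrivals, capacity, dwell):
    """Event-driven sketch of a continuous oven: each entity occupies a
    slot for exactly `dwell` time units; at most `capacity` entities are
    inside at once. Returns a list of (entity, enter_time, exit_time)."""
    waiting = deque()   # entities queued outside the oven
    exits = []          # min-heap of (exit_time, entity) currently inside
    log = []
    inside = 0

    def try_load(now):
        nonlocal inside
        while waiting and inside < capacity:
            entity = waiting.popleft()
            inside += 1
            heapq.heappush(exits, (now + dwell, entity))
            log.append((entity, now, now + dwell))

    events = [(t, 'arrive', e) for e, t in arrivals]
    heapq.heapify(events)
    while events or exits:
        # advance to the next event: an arrival or an entity finishing
        if exits and (not events or exits[0][0] <= events[0][0]):
            now, _ = heapq.heappop(exits)
            inside -= 1
        else:
            now, _, entity = heapq.heappop(events)
            waiting.append(entity)
        try_load(now)   # any freed/new capacity admits waiting entities
    return log

# Two slots, 2-hour dwell: the third loaf must wait until t=2.
log = simulate_oven([('a', 0), ('b', 0), ('c', 0)], capacity=2, dwell=2)
print(log)  # [('a', 0, 2), ('b', 0, 2), ('c', 2, 4)]
```

This is essentially the "parallel machines fed by a queue" option collapsed into one object: one internal event per exit instead of one machine instance per slot, which avoids the per-instance overhead mentioned above.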
Status changed to closed