A packet switch is a component of a communication network that connects several sources of data packets with several destinations. The switch directs each packet to the correct destination, processing packets as they arrive. The switch must be able to manage bottleneck situations in which several data packets have the same arrival time and the same destination. Packet switches can use a variety of architectures and algorithms. This section describes how to construct one particular packet switch model.
In this example, the goal is to construct a switch that:
Connects three data sources to three destinations
Holds arriving packets in a buffer (that is, a queue) for each of the data sources
Randomly resolves contention when the packets at the heads of two or more queues simultaneously seek the same destination, with no bias toward any particular source of packets
The next figure shows an overview of the block diagram.
The packet switch example models each packet as an entity. The Time-Based Entity Generator block creates entities. To implement exponentially distributed intergeneration times between successive entities from each source, the block has its Distribution parameter set to Exponential.
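The exponential intergeneration process can be sketched in Python. This is an illustrative stand-in for the block's behavior, not SimEvents code; the `rate` value and the helper name are assumptions, since the model's actual mean intergeneration time is not stated here.

```python
import random

def intergeneration_times(rate, n, seed=None):
    """Draw n exponentially distributed gaps between successive
    entity generations (mean gap = 1/rate)."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

# rate=0.5 is an illustrative value only
times = intergeneration_times(rate=0.5, n=5, seed=42)
```

Each of the three sources would use its own seed, mirroring the distinct initial seeds in the model.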
Attached to each entity are these pieces of data, stored in attributes:
The source of the packet, an integer between 1 and 3
The destination of the packet, a random integer between 1 and 3
The length of the packet, a random integer between 6 and 10
Note: The entity does not actually carry a payload. This example models the transmission of data at a level of abstraction that includes timing and routing behaviors but is not concerned with the specific user data in each packet.
Copies of the Event-Based Random Number block produce the random destination and length data. The Set Attribute block attaches all the data to each entity. The Set Attribute block is configured so that the destination and length come from input signals, while the source number comes from a constant in the dialog box.
The packet generation processes for the different sources differ only in the initial seeds for the random number generators and the values for the source attribute.
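The three attributes can be mimicked with a small dictionary-based sketch. The field names and the `make_packet` helper are hypothetical; SimEvents attaches these values as entity attributes via the Set Attribute block.

```python
import random

def make_packet(source, rng):
    """Build one packet 'entity' carrying the three attributes from
    the model: fixed source (1-3), random destination (1-3),
    random length (6-10)."""
    return {
        "source": source,                  # constant per generator, as in the dialog box
        "destination": rng.randint(1, 3),  # from an Event-Based Random Number block
        "length": rng.randint(6, 10),      # likewise random per packet
    }

rng = random.Random(7)   # each source would use its own initial seed
pkt = make_packet(source=1, rng=rng)
```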
The packet switch example uses one FIFO Queue block as a buffer following each data source's Set Attribute block.
The queue uses a FIFO queuing discipline, which does not take into account the destination of each packet. Note that such a model can suffer from "head-of-line blocking," which occurs when a packet behind the head of the queue must wait even though its destination is available, merely because the packet at the head of the queue is bound for an unavailable destination.
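Head-of-line blocking is easy to demonstrate with a plain FIFO. This is a minimal sketch of the discipline, not the FIFO Queue block itself; the `advance_head` helper is hypothetical.

```python
from collections import deque

def advance_head(queue, available_destinations):
    """FIFO discipline: only the head packet may depart. If its destination
    is unavailable, every packet behind it waits too (head-of-line blocking)."""
    if queue and queue[0]["destination"] in available_destinations:
        return queue.popleft()
    return None

q = deque([{"destination": 1}, {"destination": 2}])
# destination 1 is busy, destination 2 is free, yet nothing departs:
assert advance_head(q, available_destinations={2}) is None
```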
A core block in the packet switch example is the Output Switch block. This block sorts arriving entities so that they depart at the appropriate entity output port based on the entities' Destination attribute values.
This part of the example is similar to the model shown in Use an Attribute to Select an Output Port.
The packet switch model must enable entities to advance from three queues to the single entity input port of the Output Switch block. Candidate blocks are Input Switch and Path Combiner. The Path Combiner block is more appropriate because it processes entities as they arrive from any of the entity input ports, whereas the Input Switch block would restrict arrivals to a specific selected entity input port.
Contention among packets can occur from:
No blockage: Multiple packets from different sources with the same intended destination arrive simultaneously at their respective empty queues and immediately attempt to advance to the Path Combiner block.
Although the arrivals occur at the same simulation time value, the processing sequence depends on:
The priorities of the entity generation events. In this example, all Time-Based Entity Generator blocks share the same Generation event priority parameter value.
The Execution order parameter in the model's Configuration Parameters dialog box. In this example, the parameter is set to Randomized. As a result, when two packets are generated simultaneously, the sequence of generation events in this example is random.
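The effect of a randomized execution order can be sketched by shuffling same-time events. This is a toy illustration, assuming all events share the same time and priority, as they do for the generators in this example.

```python
import random

def order_simultaneous_events(events, rng):
    """With a randomized execution order, events scheduled for the same
    simulation time and the same priority fire in a random sequence."""
    events = list(events)
    rng.shuffle(events)
    return events

rng = random.Random()
seq = order_simultaneous_events(["gen1", "gen2", "gen3"], rng)
```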
End of blockage: Multiple packets with the same intended destination are at the head of their respective queues precisely when the Path Combiner block's entity output port changes from blocked to unblocked.
For example, suppose all of the queues have leading packets destined for the first server, which is busy serving an earlier packet. The Path Combiner block's entity output port is blocked. When the server completes service on the earlier packet, the Path Combiner block's entity output port becomes unblocked. At that moment, the Path Combiner block notifies its entity input ports of the status change, in a sequence determined by the Input port precedence parameter. In this example, the parameter is set to Equiprobable. As a result, when packets waiting at the head of their queues have the same intended destination that changes from unavailable to available, the sequence in which these packets are selected for advancement is random.
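An Equiprobable precedence amounts to a uniform random pick among the contending input ports. A minimal sketch, with a hypothetical `notify_unblocked` helper:

```python
import random

def notify_unblocked(contending_ports, rng):
    """When the output port unblocks, choose which entity input port
    advances its waiting packet first. With an Equiprobable precedence,
    the choice is uniform over the contenders."""
    return rng.choice(contending_ports)

rng = random.Random()
# all three queue heads want the destination that just became available
winner = notify_unblocked([1, 2, 3], rng)
```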
The packet switch example does not model the channel in detail. The channel's key purpose is to process one packet at a time, for a duration that depends on the length of the packet. During processing, other packets bound for the same destination must wait, which introduces resource contention into the simulation.
Each channel is modeled as a Single Server block that delays each entity by an amount of time stored in the entity's Length attribute.
Each destination is modeled as an Entity Sink block.
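The channel's one-at-a-time service can be sketched as a loop that accumulates per-packet service times. The `time_per_unit_length` factor is an assumption for illustration; in the model, the delay comes from the Length attribute.

```python
def serve(packets, time_per_unit_length=1.0):
    """One-at-a-time channel: each packet occupies the server for a time
    proportional to its length, so later packets bound for the same
    destination must wait their turn."""
    t = 0.0
    finish_times = []
    for p in packets:
        t += p["length"] * time_per_unit_length  # service duration from the length
        finish_times.append(t)
    return finish_times

# two back-to-back packets of lengths 6 and 10 finish at t=6 and t=16
assert serve([{"length": 6}, {"length": 10}]) == [6.0, 16.0]
```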