Latency in Cooperative Communication

Syed Asad on 4 Aug 2012
I want to simulate and calculate the delay in an n-node network in which the nodes are randomly distributed, but for simplicity their distances are known, as shown in the figure below.
I want to calculate the total time (latency) in this whole scenario, i.e. the propagation delay as well as the relaying delay when a nearby node cooperates.
Calculating the propagation delay is easy, i.e. (distance = velocity * time), which I did. But how should I calculate the processing time, the transmission time, and, more importantly, the relaying time?
I am dealing with cooperative communication, which is proposed for LTE systems.
Some explanation of the figure: it shows a cell of radius 16 km; the intermediate circles (different colors) show specific distances from the BS, for example 2 km, 4 km, etc. In the center we have the BS, and the cell consists of n nodes (users). The black arrow shows the direct-path signal (not necessarily LOS), the blue arrow shows a multipath signal, and the purple arrow shows the relayed signal.
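For the propagation-delay part, this is roughly what I mean (a minimal sketch only; the node count and the random node positions are made up for illustration, and I assume free-space propagation at the speed of light):

% Minimal sketch: propagation delay for n randomly placed nodes in a
% cell of radius 16 km, assuming free-space propagation at the speed of light.
c = 3e8;                      % propagation speed [m/s]
R = 16e3;                     % cell radius [m]
n = 20;                       % number of nodes (users)

% uniformly distribute the nodes over the cell area (BS at the origin)
r     = R * sqrt(rand(n,1));  % sqrt gives uniform density over the area
theta = 2*pi*rand(n,1);
x = r .* cos(theta);
y = r .* sin(theta);

dist      = hypot(x, y);      % distance of each node from the BS [m]
propDelay = dist / c;         % one-way propagation delay [s]

fprintf('Maximum propagation delay: %.2f microseconds\n', 1e6*max(propDelay));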
PLEASE HELP ME.

Answers (1)

Walter Roberson on 5 Aug 2012
Processing time and relaying time are not things that can be calculated: they have to be measured instead. You need to measure them under ideal conditions (nothing is waiting to be relayed), and under typical conditions, and under heavily-loaded conditions.
In the case of relaying, you have to decide how many antennas can be in use simultaneously. If you have only a single antenna and it cannot transmit and receive at the same time, then you have to receive and store the entire packet and cannot start to transmit it onwards until the reception has finished; whereas if you have access to multiple antennas, and to channels or frequencies that do not interfere significantly with the incoming data, then you can start forwarding the packet as soon as you have decoded enough of it to figure out where it needs to go. These two approaches are known as "store and forward" (receive everything, then transmit) and "cut-through" (start transmitting as soon as you can decide where to transmit). The choice of approach can depend on the standards being followed: some standards require that you calculate the CRC or other data-validity checks and specify that you "MUST NOT" forward a packet which fails those checks; the cut-through approach would go ahead and forward a corrupted packet, because it does not know the packet is corrupted until after it has already started transmitting it.
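To make the difference concrete, here is a minimal sketch of the per-hop relaying delay (first bit in to first bit out) under each approach. Every number below is an assumption for illustration, not an LTE parameter, and the processing term is something you would have to measure:

% Rough sketch of the per-hop relaying delay (first bit in -> first bit out)
% for the two approaches.  All numbers are illustrative assumptions, not LTE
% parameters; the processing term has to be measured on the real relay.
bitRate    = 1e6;     % link bit rate [bit/s]
packetBits = 12000;   % full packet length incl. headers and CRC [bits]
headerBits = 160;     % bits that must arrive before a forwarding decision
procTime   = 50e-6;   % measured processing/queuing time [s] (placeholder)

% store and forward: the whole packet must be received before retransmission
storeForward = packetBits/bitRate + procTime;

% cut-through: retransmission can start once the header has been decoded
cutThrough   = headerBits/bitRate + procTime;

fprintf('store-and-forward: %.3f ms   cut-through: %.3f ms\n', ...
        1e3*storeForward, 1e3*cutThrough);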
There are some circumstances under which you can calculate processing time, but you need to know a lot about the hardware and software you are using. This is best left to experienced low-level programmers (for which you would not be using MATLAB.)
  4 Comments
Walter Roberson on 6 Aug 2012
In a store and forward network, there are several factors to consider.
  • The time lag between the first and last bits needed to receive the packet. This depends upon the true length of the packet (including all headers and error-correction bits), upon the baud rate (symbols per second), and upon the number of simultaneously transmitted symbols. There are a number of different encoding schemes in common use, so one cannot simplify this as "number of bits in the packet". For example, in a spread-spectrum approach, all of the bits of the packet might be transmitted simultaneously (on different frequencies), so the time might simply be the time of the longest bit hold. This time applies at each receiving device, and is a consequence of the transmission scheme only. It is hardly even worth mentioning, as it is just a part of the normal transmission time; I list it here only to explicitly distinguish it from the next item.
  • Once the bits have been physically received, they have to be conveyed through whatever internal devices are in the way to reach the point where they are potentially accessible to software. This delay is hardware (and firmware) specific and cannot be calculated without very thorough knowledge of the equipment.
  • There can then be a delay before the software is ready to examine the bits. This could, for example, be due to the hardware having to issue a processor "interrupt", resulting in a software context switch before the software gets control. This delay is hardware specific, and can also be dependent on what the processor is doing at the time, but if a stock processor or DSP is being used (rather than a custom chip) then the minimum and maximum interrupt times can usually be found in the documentation (some devices will list the time in terms of processor cycles, but other devices will list elapsed times independent of the processor clock rate.)
  • The software then gets control and deals with the packet, making decisions about it, and either punting it up to the local networking stack or queuing it to be output. This time is implementation dependent, both software and hardware, and can vary greatly depending upon the quality of the code. Devices that are designed primarily for relaying packets might not have any processing level "above" this for packets that do get relayed, and in that case would "return" from the interrupt at this point. General-purpose devices, such as an iPhone that has been programmed to relay packets, will usually need to pass most packets up the line to the networking code. The time to enter a packet into the network stack is software and hardware dependent and can vary a fair bit depending on the quality of the code. After the packet has (if necessary) been queued into the network code, the interrupt routine would "return" from the interrupt.
  • All of the time until the hardware interrupt returned would definitely be counted as "receiving time".
  • When a networking layer is being used, a "software interrupt" will often need to be generated, with the packet software layer getting its turn when everything "more important" was done. The time required will depend upon the hardware and upon what else is happening (e.g., did the user just tap to fire at an incoming Space Alien?) Depending on one's definitions, all the time until the packet software layer gained control might be counted as "receiving time".
  • Then there is time at the packet software layer to make a decision about the packet and route it. If a packet software layer is being used at all, then very likely the packet would have to be queued through the networking code (which takes time.)
  • At this point the packet is queued to go out, whether that happened through a packet software layer or happened through direct queuing at the hardware interrupt (for dedicated relay devices.) Depending on what else is waiting to go out, the packet could be queued for varying lengths of time.
  • Finally the packet would have to be transmitted, the time for which depends upon the encoding scheme, the frequencies involved, and so on.
  • The entire time from when the first bit (or all bits simultaneously) is received until the first bit (or all bits) are transmitted is counted as "relaying time" -- the net signal delay involved in the relaying process. As several of these steps are strongly dependent on hardware or software timing, and the delays may depend upon what else the relaying device is doing, it is impossible to pre-calculate the relaying time. One might, however, be able to measure the minimum relaying time; a rough accounting of these components is sketched below.
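If you do manage to measure the hardware- and software-dependent terms, adding the pieces together is the easy part. A sketch of that accounting (every value below is a placeholder that would have to be replaced with a measurement from your actual relay device):

% Sketch: total relaying time as the sum of the stages listed above.
% Every value is a placeholder; the hardware- and software-dependent terms
% cannot be calculated and must be measured on the actual device.
rxTime       = 12e-3;   % first to last bit of the incoming packet [s]
hwTransfer   = 20e-6;   % hardware/firmware transfer to software-visible memory
interruptLat = 10e-6;   % interrupt latency / context switch
driverTime   = 30e-6;   % interrupt-level handling and queuing into the stack
stackTime    = 80e-6;   % packet software layer: routing decision and queuing
outputQueue  = 0;       % waiting behind other outgoing packets (load dependent)
txStart      = 5e-6;    % from dequeue until the first bit goes on the air

relayTime = rxTime + hwTransfer + interruptLat + driverTime + ...
            stackTime + outputQueue + txStart;

fprintf('Estimated minimum relaying time: %.3f ms\n', 1e3*relayTime);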
Syed Asad on 6 Aug 2012
thanks a lot man... you made several things clear to me
