-------------------------------------------------------------------------------
Author/Contact: 
--------------- 
 branislav.kusy@vanderbilt.edu (Branislav Kusy, ISIS, Vanderbilt) 
 miklos.maroti@vanderbilt.edu (Miklos Maroti, ISIS, Vanderbilt)

-------------------------------------------------------------------------------
Conventions: 
------------ 
 TINYOS - the directory where you store the tinyos-1.x release 
 ISIS - the directory of the VU TinyOS tree, i.e. TINYOS/contrib/vu

-------------------------------------------------------------------------------
DESCRIPTION: 
------------ 
 The TestTimeSync and TestTimeSyncSusp components verify the precision of our 
 multi-hop time synchronization (ISIS/tos/lib/TimeSync, ISIS/apps/TestTimeSync). 
 The TestTimeSyncPoller is a dedicated beacon (reference broadcaster). Each 
 client (TestTimeSync app) responds to the beacon's radio msgs by sending 
 useful data to a base station (GenericBase or TOSBase). The base station 
 forwards the data to the PC in the DiagMSG format. More information about 
 DiagMSGs and how to display them can be found in TINYOS/tos/lib/DiagMsg. 

 We propose the following test scenario: 
 
 - one dedicated beacon (TestTimeSyncPollerC) periodically broadcasts 
  TimeSyncPoll msgs. 
 - several clients (TestTimeSyncC) simply compose the TimeSyncC component, 
  which provides the time sync algorithm, with the TimeSyncDebuggerC 
  component, which responds to the TimeSyncPoll msgs by reporting time sync 
  related data to a base station. The format of this data is DiagMSG (see 
  TINYOS/tos/lib/DiagMsg). 
 - one base station (GenericBase or TOSBase) connected to a PC. On the PC you 
  should run an application that can decode DiagMSGs and print them out in 
  readable form. 

 For testing purposes, probably the most important part of the reported data 
 is the global time of arrival of the beacon's message. The beacon's radio msg 
 arrives at all clients at the same time instant; therefore, if time sync 
 works properly, the global times reported by the different clients should be 
 the same. 
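 As an illustration, the worst-case synchronization error for one poll can be 
 computed as the maximum pairwise difference among the reported global times. 
 The following is a minimal sketch with made-up timestamp values; the function 
 name and the values are ours, not part of the test applications.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical global timestamps (in clock ticks) reported by four
 * clients for the same TimeSyncPoll sequence number. */
static const uint32_t reported[] = { 1000210u, 1000207u, 1000213u, 1000209u };
#define NUM_REPORTS (sizeof(reported) / sizeof(reported[0]))

/* All clients timestamp the same radio event, so the spread between
 * the smallest and largest reported global time is the worst-case
 * synchronization error for this poll. */
static uint32_t max_pairwise_error(const uint32_t *t, size_t n)
{
    uint32_t min = t[0], max = t[0];
    for (size_t i = 1; i < n; ++i) {
        if (t[i] < min) min = t[i];
        if (t[i] > max) max = t[i];
    }
    return max - min;
}
```

 Running this over the DiagMSG output for every sequence number gives the 
 error curves reported in the evaluation sections below.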

 TestTimeSyncSuspC periodically disables and enables the radio on each of the 
 motes. The precision of the time sync protocol should not be significantly 
 affected by this.
 
 The provided TimeSyncDebugger component can also be used to monitor the time 
 synchronization in your application. Just wire and start the TimeSyncDebuggerC 
 component in your application. The same TestTimeSyncPollerC can be used to 
 request the synchronization information.

-------------------------------------------------------------------------------
REPORTED DATA: 
--------------
 Each diagnostic message sent back to the base station contains the 
 following fields:

 - the node ID of the mote that is sending this report (uint16_t) 
 - the sequence number of the polling message, which is increased by the 
   poller for each new polling msg (uint16_t) 
 - the global time when the polling message arrived (uint32_t) 
 - the local time of the mote when the polling message arrived (uint32_t) 
 - the skew (the speed ratio between the clocks of the root of the network and 
   the receiving node). Note that this value is normalized to 0, so 0 means 
   that the two clocks run at the same speed. (float) 
 - a boolean value saying whether the node is synchronized or not. If a node 
   is not synchronized, the global time is not to be considered valid (uint8_t) 
 - the ID of the root of the time sync multi-hop algorithm (uint16_t) 
 - the sequence number of the last time synchronization msg received from the 
   current root (uint8_t) 
 - the number of entries currently stored in the linear regression table 
   (uint8_t)
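 The fields above can be pictured as the C struct below; the field names and 
 the extrapolation helper are our own illustration (the actual layout is 
 defined by the TimeSyncDebugger wiring), assuming the skew is stored 
 normalized to 0 as described.

```c
#include <stdint.h>

/* Illustrative layout of the reported fields; names are ours, not
 * taken from the TimeSyncDebugger source. */
typedef struct {
    uint16_t nodeID;      /* mote sending this report             */
    uint16_t pollSeqNum;  /* sequence number of the polling msg   */
    uint32_t globalTime;  /* global arrival time of the poll msg  */
    uint32_t localTime;   /* local arrival time of the poll msg   */
    float    skew;        /* clock speed ratio, normalized to 0   */
    uint8_t  isSynced;    /* nonzero iff globalTime is valid      */
    uint16_t rootID;      /* current root of the multi-hop tree   */
    uint8_t  rootSeqNum;  /* seq num of last msg from the root    */
    uint8_t  numEntries;  /* entries in the regression table      */
} timesync_report_t;

/* Since skew is normalized to 0, a local interval dt corresponds to
 * approximately dt * (1 + skew) global ticks; this extrapolates the
 * global time at a later local time from one report. */
static uint32_t extrapolate_global(const timesync_report_t *r,
                                   uint32_t localNow)
{
    uint32_t dt = localNow - r->localTime;
    return r->globalTime + dt + (uint32_t)((float)dt * r->skew);
}
```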

-------------------------------------------------------------------------------
TUNABLE PARAMETERS:
-------------------
(also see ISIS/tos/lib/TimeSync/TimeSync.txt)

 TIMESYNC_RATE (seconds) - how often each node transmits the time sync msg 
 TIMESYNC_SYSTIME - if defined, the faster CPU (7 MHz) clock is used; 
  otherwise the 32.768 kHz clock is used 
 TIMESYNC_DEBUG - if defined, the multi-hop network topology is enforced in 
  software 
 TIMESYNC_POLLER_RATE - how often the poller sends the beacon message 
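 These parameters are compile-time constants, so they can be overridden from 
 the Makefile via -D flags. The sketch below only shows the override pattern; 
 the default values here are placeholders, not the library's actual defaults 
 (see ISIS/tos/lib/TimeSync for those).

```c
/* Overridable compile-time parameters; the defaults below are
 * placeholders for illustration. Override from the Makefile, e.g.
 * CFLAGS += -DTIMESYNC_RATE=30 */
#ifndef TIMESYNC_RATE
#define TIMESYNC_RATE 30          /* seconds between time sync msgs */
#endif
#ifndef TIMESYNC_POLLER_RATE
#define TIMESYNC_POLLER_RATE 10   /* seconds between beacon polls   */
#endif

/* When TIMESYNC_SYSTIME is not defined, the 32.768 kHz clock is
 * used, so one TIMESYNC_RATE period spans this many clock ticks: */
enum { TIMESYNC_RATE_TICKS = TIMESYNC_RATE * 32768L };
```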

-------------------------------------------------------------------------------
STEP BY STEP GUIDE TO RUN OUR TEST SCENARIO:
--------------------------------------------

1. Upload 1 node with the TINYOS/apps/TOSBase application (node ID is not 
   important). 
2. Upload 1 node with the ISIS/apps/TestTimeSync/TestTimeSyncPollerC 
   application (modify the Makefile, or simply: 
   COMPONENT=TestTimeSyncPollerC make mica2 install; the node ID is again not 
   important). 
3. Upload 64 nodes with the ISIS/apps/TestTimeSync/TestTimeSyncC application 
   (modify the Makefile, or simply: COMPONENT=TestTimeSyncC make mica2 
   install.x), where x should be: 
   a) one of the following: 0x5ij; i,j = {0,1,...,7} (64 nodes); this forms an 
      8 by 8 grid with a maximum hop distance of 7 hops, or 
   b) the same as a), except that 0x544 should be replaced by 0x444 - this was 
      used in our test scenario. Here 0x444 becomes the root of the network; 
      since it is in the middle of the network, it is harder for the network 
      to elect a new root if we switch off 0x444. 
4. Place all 64 nodes within the radio range of the TOSBase and 
   TestTimeSyncPoller nodes and switch the nodes on. The base station should 
   start receiving time sync messages from each of the 64 nodes with the 
   TIMESYNC_RATE period, time sync poller messages with the 
   TIMESYNC_POLLER_RATE period, and DiagMSGs, which are responses to the 
   poller messages. 
5. Run one of the Java applications that decode incoming DiagMSGs 
   (TINYOS/tos/lib/DiagMsg), for example: java net.tinyos.tools.PrintDiagMsgs
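The node ID scheme from step 3 can be sketched as follows. The helper names 
are ours, and the hop count assumes the 8-neighbor (Chebyshev) connectivity 
that the software-enforced topology provides.

```c
#include <stdint.h>

/* Enumerate the 64 IDs 0x5ij, i,j in {0,...,7}, from step 3a. */
static int make_grid_ids(uint16_t ids[64])
{
    int n = 0;
    for (int i = 0; i < 8; ++i)
        for (int j = 0; j < 8; ++j)
            ids[n++] = (uint16_t)(0x500 | (i << 4) | j);
    return n;
}

/* With 8-neighbor connectivity, the hop distance between grid cells
 * (i1,j1) and (i2,j2) is the Chebyshev distance, so the maximum,
 * corner to corner, is max(7,7) = 7 hops. */
static int hop_distance(int i1, int j1, int i2, int j2)
{
    int di = i1 > i2 ? i1 - i2 : i2 - i1;
    int dj = j1 > j2 ? j1 - j2 : j2 - j1;
    return di > dj ? di : dj;
}
```

For variant b), the ID 0x544 (grid position i=4, j=4) would simply be 
swapped for 0x444 before installation.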

-------------------------------------------------------------------------------
EVALUATION for the 32.768 KHz Clock (ClockTimeStamping):
--------------------------------------------------------- 

In our test scenario we used 56 MICA2 motes arranged in a 7x8 grid. Each 
(inner) mote could talk to its 8 immediate neighbors. This topology was 
enforced in software. Each mote was sending one time sync message per 30 
seconds. 

- The nodes synchronized in 12 minutes. 8 hops at 30 seconds per hop means 
that it takes 4 minutes to get information from one end of the network to the 
other. All motes were either synchronized or able to tell that they were not 
yet synchronized within the first 6 minutes. 

- The maximum global time error between any two nodes of the network was less 
than 240 microseconds during an hour run. 

- When we killed the root (we knew which one), the maximum error jumped to 450 
microseconds, but returned to the previous 240-microsecond level in 6 minutes. 

- When we randomly turned motes off and on, one by one, the maximum error was 
not affected. The rejoining motes were synchronized in 1-2 minutes.

- When we turned off every other mote (not including the root), the maximum 
error was not affected. When we turned these back on, the error did not 
increase, and all these motes got synchronized in 4 minutes.
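The 4-minute propagation figure in the first bullet follows directly from the 
broadcast period and the network diameter; a quick check of the arithmetic:

```c
/* One time sync broadcast per 30 seconds, and information advances
 * at most one hop per broadcast period across the 8-hop diameter. */
enum {
    TIMESYNC_PERIOD_SEC   = 30,
    NETWORK_DIAMETER_HOPS = 8,
    PROPAGATION_SEC       = TIMESYNC_PERIOD_SEC * NETWORK_DIAMETER_HOPS,
};
```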

-------------------------------------------------------------------------------
EVALUATION for the CPU clock (SysTimeStamping):
------------------------------------------------ 
- a similar setup yields an average error of 1.2 microseconds per hop and a 
  maximum error of 6 microseconds per hop.

For more evaluation details, see our technical report at
 
 https://www.isis.vanderbilt.edu/projects/nest/documentation/Vanderbilt_NEST_TimeSynch.pdf

