<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://ovsa.njit.edu//wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mlukicheva</id>
	<title>EOVSA Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://ovsa.njit.edu//wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mlukicheva"/>
	<link rel="alternate" type="text/html" href="http://ovsa.njit.edu//wiki/index.php/Special:Contributions/Mlukicheva"/>
	<updated>2026-04-17T21:31:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.1</generator>
	<entry>
		<id>http://ovsa.njit.edu//wiki/index.php?title=2016_December&amp;diff=644</id>
		<title>2016 December</title>
		<link rel="alternate" type="text/html" href="http://ovsa.njit.edu//wiki/index.php?title=2016_December&amp;diff=644"/>
		<updated>2016-12-07T00:51:59Z</updated>

		<summary type="html">&lt;p&gt;Mlukicheva: /* Dec 07 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Dec 01 ==&lt;br /&gt;
'''12:52 UT''' For the past several days we have done normal solar observing, with some interruptions due to testing new fast-correlator designs.  The latter is showing improvement, but there are still some issues to resolve.  During the solar observations, there have been a few small events.  One M1-class flare occurred, but apparently had almost no radio emission.  Activity seems to be subsiding now.&lt;br /&gt;
&lt;br /&gt;
== Dec 02 ==&lt;br /&gt;
'''15:00 UT''' We made an attempt last night to do pointing measurements on Ant 14, but there were several problems.  The tunable LO was not switching, but mainly Ant 14 was not in the subarray!  After this was fixed, the wind was high, so observations were terminated early.  None of the observations will be useful.  We will proceed with normal solar observing.&lt;br /&gt;
&lt;br /&gt;
== Dec 03 ==&lt;br /&gt;
'''12:04 UT''' Another attempt is underway to do pointing measurements.  Early in the observations, it was discovered that the GPS clock had lost connection, which caused many subsystems to get confused.  By about 01:30 UT the system had been brought back into service, and data-taking had started.  Ant 12 is missing, because of a possibly spurious ''Lo Hard Limit'' in Azimuth (the current Azimuth is reading high, not low...). Also, overnight there have been many episodes of high wind, so many of the scans will be useless.  Right now I count 9 potentially good scans.  I also found the tunable LO saying ''Queue Overrun'', so tuning on band14 may not have been correct for a while--not sure when this occurred.&lt;br /&gt;
&lt;br /&gt;
== Dec 04 ==&lt;br /&gt;
'''12:22 UT''' The wind was bad '''all day''' yesterday without let-up, hence there were virtually '''no''' good scans the entire day--maybe about 9 total.  I restarted the pointing schedule last night, for a third night of attempts to get good results, and the wind was good, '''but''' this morning I found three anomalies: Ant 14 Dec drive was stuck with a permit, Ant 14 VATTN was set at 5 dB (maybe not a killer), and Ant 14 DCMATTN was 0 (definitely a killer).  Also DCMAUTO was not off, which is very strange.  I have no idea how these settings get corrupted.  It may be that none of these data up to now are any good.  The scan starting 12:39 UT may be the first with any chance.&lt;br /&gt;
&lt;br /&gt;
'''14:30 UT''' The Ant 14 Dec drive spiked again, so it only successfully did one source.  I have reset it, but there appears to be some generic problem with the dish that results in Dec motor-current spikes.&lt;br /&gt;
&lt;br /&gt;
'''17:30 UT''' I checked again, and again the Ant 14 drive is stopped.  I reset it, but I do not know if we are getting any good data.  Perhaps I have to babysit it on each source change.&lt;br /&gt;
&lt;br /&gt;
'''21:51 UT''' For what it is worth, the observations from 17:30 UT to now should be good, with 4 more pointings to go.&lt;br /&gt;
&lt;br /&gt;
== Dec 05 ==&lt;br /&gt;
'''23:30 UT''' No observations at all today, because of testing of the fast correlator.  The packet headers look better now, so we are getting close, but there are some issues with both number of X packets and the content of the packets, suggesting an issue with the FFT block.&lt;br /&gt;
&lt;br /&gt;
== Dec 06 ==&lt;br /&gt;
'''17:20 UT''' Normal solar observing has started.  Kjell tells me that the air conditioning stopped again, but he is cooling the room manually.&lt;br /&gt;
&lt;br /&gt;
'''18:34 UT''' It seems that none of the pointing observations we attempted over the weekend are any good, even though at least some were certainly supposed to be tracking, with correct settings.  As a test, I am grabbing a quick observation of 3C273--on source by 18:36 UT.  Okay, I just checked, and the data on 3C273 is just fine.  I cannot imagine what went wrong over the weekend.  I guess we have to make another attempt to do pointing, overnight.&lt;br /&gt;
&lt;br /&gt;
'''18:50 UT''' Back to solar observing.&lt;br /&gt;
&lt;br /&gt;
'''23:41 UT''' Start of 24-hour pointing measurements. Ant12 &amp;quot;to stow&amp;quot;. Wind 10 mph.&lt;br /&gt;
&lt;br /&gt;
== Dec 07 ==&lt;br /&gt;
'''00:00 UT''' Continue 4-hour pointing measurements.&lt;br /&gt;
&lt;br /&gt;
'''00:04 UT''' Strong wind 21 mph.&lt;br /&gt;
&lt;br /&gt;
'''00:44 UT''' Front end temperatures for Ant3 and Ant14 are red.&lt;/div&gt;</summary>
		<author><name>Mlukicheva</name></author>
	</entry>
	<entry>
		<id>http://ovsa.njit.edu//wiki/index.php?title=Stateframe_Database&amp;diff=116</id>
		<title>Stateframe Database</title>
		<link rel="alternate" type="text/html" href="http://ovsa.njit.edu//wiki/index.php?title=Stateframe_Database&amp;diff=116"/>
		<updated>2016-09-20T18:49:51Z</updated>

		<summary type="html">&lt;p&gt;Mlukicheva: /* Initial setup of database access software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Setup and Initial Test of Python Access to Existing Database ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
This document describes the initial setup and access of the EOVSA Stateframe database from Python.  The original version of this document described a database called &amp;quot;eOVSA05&amp;quot; that existed on a machine named &amp;quot;GARY-FY13NB&amp;quot; (a laptop).  The connection string for connecting from the laptop was:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server}; SERVER=GARY-FY13NB; DATABASE=eOVSA05; Trusted_connection=yes;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, some adjustments were needed to allow connection to the actual server machine (EOVSASQL), using an IP address and &amp;quot;SQL Authentication.&amp;quot;  The trick was to create a user on the server in the SQL Server Management Studio (user name is &amp;lt;code&amp;gt;'aaa'&amp;lt;/code&amp;gt;, password is &amp;lt;code&amp;gt;'I@bsbn2w'&amp;lt;/code&amp;gt;), and then use the server IP address (currently '128.235.89.168') on port 1433.  This port had to be opened in the server's Windows Firewall.  The new connection string is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server}; SERVER=128.235.89.168,1433; DATABASE=eOVSA06;UID=aaa;PWD=I@bsbn2w;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The other commands have also been updated for the current situation.&lt;br /&gt;
As of 2014-Sep-30, the SQL Server has been set up at OVRO and the connection string from Helios is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={FreeTDS}; SERVER=192.168.24.106,1433; DATABASE=eOVSA06;UID=aaa;PWD=I@bsbn2w;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Initial setup of database access software ===&lt;br /&gt;
&lt;br /&gt;
The Python library used to access Microsoft SQL Server is called pyodbc. This section describes downloading the library, integrating it with an existing '''Python 2.7''' installation, making a first connection to the database, and executing a query.  The query assumes that the database &amp;quot;eOVSA06&amp;quot; has a populated table &amp;quot;fV26_vD15&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
1.	First install '''pyodbc''' ([https://code.google.com/p/pyodbc/downloads/list]). For instance, '''pyodbc-3.0.7.win-amd64-py2.7.exe'''.&lt;br /&gt;
&lt;br /&gt;
2.	Start '''ipython''' and type &lt;br /&gt;
	&amp;lt;pre&amp;gt;import pyodbc&amp;lt;/pre&amp;gt;&lt;br /&gt;
3.	Connect to database with&lt;br /&gt;
	&amp;lt;pre&amp;gt;cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server}; SERVER=128.235.89.168,1433; DATABASE=eOVSA06; UID=aaa; PWD=I@bsbn2w;&amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
4.	Get a 'cursor' to the database:&lt;br /&gt;
	&amp;lt;pre&amp;gt;cursor = cnxn.cursor()&amp;lt;/pre&amp;gt;&lt;br /&gt;
5.	Send an SQL query:&lt;br /&gt;
	&amp;lt;pre&amp;gt;cursor.execute(&amp;quot;select top 16 * from fV26_vD15 order by Timestamp&amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
6.	Fetch the data returned by the query&lt;br /&gt;
	&amp;lt;pre&amp;gt;rows = cursor.fetchall()&amp;lt;/pre&amp;gt;&lt;br /&gt;
7.	rows now contains a list of 16 rows.  &lt;br /&gt;
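The numbered steps above can be condensed into a short script.  A minimal sketch follows; &amp;lt;code&amp;gt;make_conn_str&amp;lt;/code&amp;gt; is a hypothetical helper (not part of the EOVSA software) that assembles the same connection string used in step 3, so the server address, database name, and credentials are kept in one place:&lt;br /&gt;

```python
# Hypothetical helper: build the pyodbc connection string used in step 3
# from its parts.  Server address, database, and credentials are the
# values quoted in the text above; adjust them for your installation.
def make_conn_str(server, port, database, uid, pwd, driver="{SQL Server}"):
    """Assemble a SQL-Authentication connection string for pyodbc."""
    return ("DRIVER=%s; SERVER=%s,%d; DATABASE=%s; UID=%s; PWD=%s;"
            % (driver, server, port, database, uid, pwd))

conn_str = make_conn_str("128.235.89.168", 1433, "eOVSA06", "aaa", "I@bsbn2w")

# With pyodbc installed, steps 3-6 then become:
#   cnxn = pyodbc.connect(conn_str)
#   cursor = cnxn.cursor()
#   cursor.execute("select top 16 * from fV26_vD15 order by Timestamp")
#   rows = cursor.fetchall()
```

On Linux the same helper applies with &amp;lt;code&amp;gt;driver=&amp;quot;{FreeTDS}&amp;quot;&amp;lt;/code&amp;gt;, as in the introduction.&lt;br /&gt;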
&lt;br /&gt;
&lt;br /&gt;
Note that setup on Linux (Ubuntu) is quite a bit more complicated.  Here are the steps:&lt;br /&gt;
&lt;br /&gt;
1.	Download '''pyodbc-3.0.7.zip''' from the above link (or copy from '''helios.solar.pvt''').&lt;br /&gt;
&lt;br /&gt;
2.	Install unixODBC and FreeTDS packages:&lt;br /&gt;
	&amp;lt;pre&amp;gt;sudo apt-get install unixodbc-dev freetds-dev freetds-bin tdsodbc&amp;lt;/pre&amp;gt;&lt;br /&gt;
3.	Edit '''/etc/freetds/freetds.conf''' (or copy from helios) to change the last 4 lines to&lt;br /&gt;
	&amp;lt;pre&amp;gt;&lt;br /&gt;
        [sqlserver]&lt;br /&gt;
	host = sqlserver.solar.pvt&lt;br /&gt;
	port = 1433&lt;br /&gt;
	tds version = 11.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
4.	Edit file '''/etc/odbc.ini''' (or copy from helios) to contain:&lt;br /&gt;
	&amp;lt;pre&amp;gt;&lt;br /&gt;
        [sql]&lt;br /&gt;
        Driver = FreeTDS&lt;br /&gt;
	Description = ODBC connection via FreeTDS&lt;br /&gt;
	Trace = No&lt;br /&gt;
	Servername = sqlserver.solar.pvt&lt;br /&gt;
	Database = eOVSA06&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
5.	Edit file '''/etc/odbcinst.ini''' (or copy from helios) to contain:&lt;br /&gt;
	&amp;lt;pre&amp;gt;&lt;br /&gt;
        [FreeTDS]&lt;br /&gt;
	Description     = TDS driver (Sybase/MS SQL)&lt;br /&gt;
	Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so&lt;br /&gt;
	Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so&lt;br /&gt;
	CPTimeout       =&lt;br /&gt;
	CPReuse         =&lt;br /&gt;
	FileUsage       = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
6.	Unzip the zip file in step 1, cd to the directory '''pyodbc-3.0.7''', and type:&lt;br /&gt;
       &amp;lt;pre&amp;gt;&lt;br /&gt;
        python setup.py build&lt;br /&gt;
        sudo python setup.py install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From there, use the FreeTDS connection string for Linux shown in the introduction, and proceed as in the Windows case.&lt;br /&gt;
&lt;br /&gt;
=== Examples of returned queries ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;select top 2 * from fV26_vD1 order by Timestamp&amp;quot;)&lt;br /&gt;
rows = cursor.fetchall()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the entire “dimension-1” data for the first two entries in the table, ordered by ''Timestamp''.  The contents can be accessed one row at a time (&amp;lt;code&amp;gt;rows[0]&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rows[1]&amp;lt;/code&amp;gt; in this case), and the entry names can be listed with &amp;lt;code&amp;gt;rows[0].cursor_description&amp;lt;/code&amp;gt;.  One can access the value of, say, the current temperature via &amp;lt;code&amp;gt;rows[0].Sche_Data_Weat_Temperature&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;&amp;quot;&amp;quot;select top 20 * from fV26_vD15 a where (a.[I15] % 15) = 0 order by Timestamp&amp;quot;&amp;quot;&amp;quot;)&lt;br /&gt;
rows = cursor.fetchall()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the first 20 rows for Antenna 1 [via &amp;lt;code&amp;gt;(a.[I15] % 15) = 0&amp;lt;/code&amp;gt;] of the dimension-15 table, ordered by ''Timestamp''.  One can check that these are all for the same antenna, for example, by checking one of the pointing coefficients using the Python code:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for row in rows: print row.Ante_Cont_PointingCoefficient2&amp;lt;/pre&amp;gt;&lt;br /&gt;
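The antenna-selection filter in the query above can be checked offline.  A minimal sketch, using plain dictionaries as stand-ins for pyodbc result rows (the cyclic index column is assumed to run 0-14 per timestamp):&lt;br /&gt;

```python
# Dimension-15 rows carry a cyclic index 0..14 identifying the antenna.
# Keeping rows whose index is 0 mirrors the SQL filter
# (a.[I15] % 15) = 0 used in the query above.
# These dictionaries are stand-ins for pyodbc result rows.
rows = [{"I15": i % 15, "Timestamp": 1000 + i} for i in range(45)]
ant1 = [r for r in rows if r["I15"] % 15 == 0]
assert all(r["I15"] == 0 for r in ant1)  # every surviving row is antenna 1
```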
&lt;br /&gt;
&lt;br /&gt;
=== Create a new StateFrameDef Table entry ===&lt;br /&gt;
&lt;br /&gt;
Whenever a new version of the stateframe is created due to some change in the content of the stateframe, a new StateFrameDef table entry must be created.  This is accomplished via a set of SQL commands, like:&lt;br /&gt;
&amp;lt;pre&amp;gt;cursor.execute(&amp;quot;insert into StateFrameDef (Status, Version, Dimension, DataType, FieldBytes, DimOffset, StartByte, FieldNum, FieldName) &amp;quot;&lt;br /&gt;
               &amp;quot;values (0, 15, 15, 'u16', 2, 28, 759, 1, 'DCM_Slot')&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on for the entire new table.  Once all of the rows of the table are entered, the command to create the tables for that version is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;update StateFrameDef set status=1 where Version='15'&amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NB: Once a stateframedef entry is entered, it cannot be entered again without causing unrecoverable errors in the table.  See below for a description of the procedure to clear all of the tables and start over.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Complete process of inserting a new binary record ===&lt;br /&gt;
&lt;br /&gt;
The structure of binary data for a given stateframe version is encoded in its XML file with the name 'stateframe_vxx.00.xml', where xx is the version number, e.g. 26.  The binary data has to be rearranged via a routine in '''stateframedef.py''' called &amp;lt;code&amp;gt;transmogrify()&amp;lt;/code&amp;gt;.  A complete recipe for reading and inserting data from a stateframe log file ('sf_20140205_v26.0.log' in this example) is as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import stateframedef as sfd&lt;br /&gt;
import pyodbc&lt;br /&gt;
sf, version = sfd.rxml.xml_ptrs('stateframe_v26.00.xml')&lt;br /&gt;
brange, outlist = sfd.sfdef(sf)&lt;br /&gt;
f = open('sf_20140205_v26.0.log', 'rb')&lt;br /&gt;
buf = f.read(32)&lt;br /&gt;
recsize = sfd.struct.unpack_from('i', buf, 16)[0]&lt;br /&gt;
f.close()&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server};SERVER=128.235.89.168,1433; &lt;br /&gt;
DATABASE=eOVSA06;UID=aaa;PWD=I@bsbn2w;&amp;quot;)&lt;br /&gt;
cursor = cnxn.cursor()&lt;br /&gt;
with open('sf_20140205_v26.0.log', 'rb') as f:&lt;br /&gt;
    bufin = f.read(recsize)&lt;br /&gt;
    bufout = sfd.transmogrify(bufin, brange)&lt;br /&gt;
    try:&lt;br /&gt;
        cursor.execute('insert into fBin (Bin) values (?)',&lt;br /&gt;
                       pyodbc.Binary(bufout))&lt;br /&gt;
    except:&lt;br /&gt;
        pass&lt;br /&gt;
cnxn.commit()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The execute line may return an error and fail, of course, especially if the data have previously been inserted, so it is enclosed in a try: except: clause so that everything does not immediately stop.  Error checking would go into the except: clause.&lt;br /&gt;
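As a sketch of the error checking suggested above (the function name and its return convention are assumptions for illustration; only the ''fBin'' insert statement comes from the recipe), the bare except clause could be replaced with a routine that reports the failure and carries on:&lt;br /&gt;

```python
def safe_insert(cursor, buf):
    """Insert one transmogrified binary record into fBin.

    Returns True on success; on failure (e.g. a record that was already
    inserted), reports the error and returns False so that a loop over
    many records does not stop at the first problem.
    """
    try:
        cursor.execute('insert into fBin (Bin) values (?)', buf)
        return True
    except Exception as e:
        print('insert into fBin failed: %s' % e)
        return False
```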
&lt;br /&gt;
&lt;br /&gt;
=== Procedure to create the information for a new stateframedef table ===&lt;br /&gt;
&lt;br /&gt;
The Python code in '''stateframedef.py''' does all of the manipulation related to creating StateFrameDef (and also ScanHeaderDef) tables as well as converting stateframe (or scanheader) binary data to the database data.  For various reasons involving the internals of SQL, it is necessary to reorder certain data (two-dimensional arrays) in the stateframe and scanheader before they can be saved in the SQL database.  These special cases are handled in the bowels of '''stateframedef.py'''’s &amp;lt;code&amp;gt;walk_keys()&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
To create a stateframedef table from a new stateframe xml file, e.g. stateframe_v32.00.xml, do:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import stateframedef as sfd&lt;br /&gt;
sf, version = sfd.rxml.xml_ptrs('stateframe_v32.00.xml')&lt;br /&gt;
brange, outlist = sfd.sfdef(sf)&lt;br /&gt;
sfd.startbyte(outlist)&lt;br /&gt;
tbl = sfd.outlist2table(outlist,version)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This last line prints the table to the screen and also creates the contents of the table as commands for input to SQL.  To upload the table in SQL and activate it, the commands are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for line in tbl:&lt;br /&gt;
    cursor.execute(line)&lt;br /&gt;
cursor.execute(&amp;quot;update StateFrameDef set status=1 where Version='&amp;quot; +  &lt;br /&gt;
               str(int(version)) + &amp;quot;'&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== To close a connection ===&lt;br /&gt;
&lt;br /&gt;
Once a connection cnxn has been created, and a cursor has been defined, do the following to release them:&lt;br /&gt;
&amp;lt;pre&amp;gt;cursor.close()&lt;br /&gt;
del cursor&lt;br /&gt;
cnxn.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== To clear all tables and start from scratch ===&lt;br /&gt;
&lt;br /&gt;
The SQL tables that describe each version of the stateframe or scanheader are called (case-insensitive) StateFrameDef and ScanHeaderDef.  If somehow these get confused (as in entering a line that is already in the table), any previously backed-up tables can be restored using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;ov_fTEST_DefRestore&amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, if no appropriate backup exists, the tables need to be cleared and reloaded.  This is a very quick process, luckily.  To empty the “definition” tables, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;cursor.execute(&amp;quot;ov_fTEST_DefTruncate&amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To reload them, the intent is to provide a single function (not available yet):&lt;br /&gt;
&amp;lt;pre&amp;gt;reload_deftables(),&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which will clear the tables and reload them automatically.  Right now, there is a routine that does this for one file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
flag = load_deftable(xml_file),&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which returns True if successful, or False if an error.  If a definition for the specified version already exists in the table, a warning is generated and the table is not redefined, but the routine returns True.  This avoids trying to redefine the table and thus messing it up.  The &amp;lt;code&amp;gt;reload_deftables()&amp;lt;/code&amp;gt; routine will just clear the tables, and then take all xml files in a directory and repeatedly call &amp;lt;code&amp;gt;load_deftable()&amp;lt;/code&amp;gt;.&lt;br /&gt;
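A minimal sketch of the intended &amp;lt;code&amp;gt;reload_deftables()&amp;lt;/code&amp;gt; is given below; since the routine does not exist yet, its signature here is only an assumption (the loader, e.g. &amp;lt;code&amp;gt;load_deftable&amp;lt;/code&amp;gt;, is passed in to keep the sketch self-contained):&lt;br /&gt;

```python
import glob
import os

def reload_deftables(cursor, xml_dir, loader):
    """Clear the definition tables, then reload every stateframe/scanheader
    XML file found in xml_dir via loader (e.g. load_deftable).  Returns
    True only if every file loads successfully."""
    cursor.execute('ov_fTEST_DefTruncate')   # empty the definition tables
    ok = True
    for xml_file in sorted(glob.glob(os.path.join(xml_dir, '*.xml'))):
        ok = loader(xml_file) and ok
    return ok
```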
&lt;br /&gt;
&lt;br /&gt;
=== Getting data across version boundaries ===&lt;br /&gt;
&lt;br /&gt;
The data for each version of the stateframe appears in unique tables, so that the information for one period of time may be in, for example, fV32_vD15, while for the next adjacent time it is in fV35_vD15 (no data were recorded for versions 33-34).  If one wants to get data that spans these two tables, one would use the following query (this example is for Ante_Cont_Elevation1):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;&amp;quot;&amp;quot;select Timestamp, Ante_Cont_Elevation1 &lt;br /&gt;
                  from fV32_vD15 a where (a.[i15] % 15) = 0 &lt;br /&gt;
                  union all &lt;br /&gt;
                  select Timestamp, Ante_Cont_Elevation1 &lt;br /&gt;
                  from fV35_vD15 b where (b.[i15] % 15) = 0 &lt;br /&gt;
                  order by TimeStamp&amp;quot;&amp;quot;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that it is necessary to include in the select list the column (TimeStamp in this case) that is to be used for ordering the data.  Otherwise one gets the cryptic error message ''[42000] ORDER BY items must appear in the select list if the statement contains a UNION, INTERSECT or EXCEPT operator.''  Once the data are selected, the following lines will plot them:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from numpy import zeros&lt;br /&gt;
from matplotlib.pyplot import plot&lt;br /&gt;
rows = cursor.fetchall()&lt;br /&gt;
elev = zeros(len(rows),'float')&lt;br /&gt;
times = zeros(len(rows),'float')&lt;br /&gt;
for i,x in enumerate(rows):&lt;br /&gt;
    times[i], elev[i] = x&lt;br /&gt;
plot(times-times[0],elev/10000.,'.')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New python code for accessing the database ===&lt;br /&gt;
&lt;br /&gt;
There is a module called dbutil that contains (and will further be developed) routines to access the database.  The two current routines are:&lt;br /&gt;
* &amp;lt;code&amp;gt;cursor = get_cursor()&amp;lt;/code&amp;gt;   Opens the database and returns a cursor for access to it.&lt;br /&gt;
* &amp;lt;code&amp;gt;mydict = get_dbrecs(cursor, version, dimension, timestamp, nrecs)&amp;lt;/code&amp;gt;   Takes as input an open cursor and version, dimension, timestamp and nrecs, and returns a dictionary from the table indicated by the version and dimension, starting at timestamp, and having nrecs entries.  The data in each mydict key has dimensions of dimension x nrecs.&lt;/div&gt;</summary>
		<author><name>Mlukicheva</name></author>
	</entry>
	<entry>
		<id>http://ovsa.njit.edu//wiki/index.php?title=Stateframe_Database&amp;diff=115</id>
		<title>Stateframe Database</title>
		<link rel="alternate" type="text/html" href="http://ovsa.njit.edu//wiki/index.php?title=Stateframe_Database&amp;diff=115"/>
		<updated>2016-09-20T18:45:47Z</updated>

		<summary type="html">&lt;p&gt;Mlukicheva: Created page with &amp;quot; == Setup and Initial Test of Python Access to Existing Database ==   === Introduction ===  This document describes the initial setup and access of the EOVSA Stateframe databa...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Setup and Initial Test of Python Access to Existing Database ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
This document describes the initial setup and access of the EOVSA Stateframe database from Python.  The original version of this document described the case of a database called &amp;quot;eOVSA05&amp;quot; that existed on a machine named &amp;quot;GARY-FY13NB&amp;quot; (a laptop).  The connection string for connecting from the laptop was:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server}; SERVER=GARY-FY13NB; DATABASE=eOVSA05;Trusted_connection=yes;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, some adjustments were needed to allow connection to the actual server machine (EOVSASQL), using an IP address and &amp;quot;SQL Authentication.&amp;quot;  The trick was to create a user on the server in the SQL Server Management Studio (user name is &amp;lt;code&amp;gt;'aaa'&amp;lt;/code&amp;gt;, password is &amp;lt;code&amp;gt;'I@bsbn2w'&amp;lt;/code&amp;gt;), and then use the server IP address (currently '128.235.89.168') on port 1433.  This port had to be opened in the server's Windows Firewall.  The new connection string is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server}; SERVER=128.235.89.168,1433; DATABASE=eOVSA06;UID=aaa;PWD=I@bsbn2w;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The other commands have also been updated for the current situation.&lt;br /&gt;
As of 2014-Sep-30, the SQL Server has been set up at OVRO and the connection string from Helios is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={FreeTDS}; SERVER=192.168.24.106,1433; DATABASE=eOVSA06;UID=aaa;PWD=I@bsbn2w;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Initial setup of database access software ===&lt;br /&gt;
&lt;br /&gt;
The Python library to access Microsoft SQL Server is called pyodbc. This section describes downloading the library and integrating it with an existing '''Python 2.7.6''' installation, making a first connection to the database, and executing a query.  The query assumes that database &amp;quot;eOVSA06&amp;quot; has a populated table &amp;quot;fV26_vD15.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
1.	First install '''pyodbc''' ([https://code.google.com/p/pyodbc/downloads/list]). I installed '''pyodbc-3.0.7.win-amd64-py2.7.exe'''.&lt;br /&gt;
&lt;br /&gt;
2.	Start '''ipython''' and type &lt;br /&gt;
	&amp;lt;pre&amp;gt;import pyodbc&amp;lt;/pre&amp;gt;&lt;br /&gt;
3.	Connect to database with&lt;br /&gt;
	&amp;lt;pre&amp;gt;cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server}; SERVER=128.235.89.168,1433; DATABASE=eOVSA06; UID=aaa; PWD=I@bsbn2w;&amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
4.	Get a 'cursor' to the database:&lt;br /&gt;
	&amp;lt;pre&amp;gt;cursor = cnxn.cursor()&amp;lt;/pre&amp;gt;&lt;br /&gt;
5.	Send an SQL query:&lt;br /&gt;
	&amp;lt;pre&amp;gt;cursor.execute(&amp;quot;select top 16 * from fV26_vD15 order by Timestamp&amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
6.	Fetch the data returned by the query&lt;br /&gt;
	&amp;lt;pre&amp;gt;rows = cursor.fetchall()&amp;lt;/pre&amp;gt;&lt;br /&gt;
7.	rows now contains a list of 16 rows.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that setup on Linux (Ubuntu) is quite a bit more complicated.  Here are the steps:&lt;br /&gt;
&lt;br /&gt;
1.	Download '''pyodbc-3.0.7.zip''' from the above link (or copy from '''helios.solar.pvt''').&lt;br /&gt;
&lt;br /&gt;
2.	Install unixODBC and FreeTDS packages:&lt;br /&gt;
	&amp;lt;pre&amp;gt;sudo apt-get install unixODBC-dev freetds-dev freetds-bin tdsodbc&amp;lt;/pre&amp;gt;&lt;br /&gt;
3.	Edit '''/etc/freetds/freetds.conf''' (or copy from helios) to change the last 4 lines to&lt;br /&gt;
	&amp;lt;pre&amp;gt;&lt;br /&gt;
        [sqlserver]&lt;br /&gt;
	host = sqlserver.solar.pvt&lt;br /&gt;
	port = 1433&lt;br /&gt;
	tds version = 11.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
4.	Edit file '''/etc/odbc.ini''' (or copy from helios) to contain:&lt;br /&gt;
	&amp;lt;pre&amp;gt;&lt;br /&gt;
        [sql]&lt;br /&gt;
        Driver = FreeTDS&lt;br /&gt;
	Description = ODBC connection via FreeTDS&lt;br /&gt;
	Trace = No&lt;br /&gt;
	Servername = sqlserver.solar.pvt&lt;br /&gt;
	Database = eOVSA06&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
5.	Edit file '''/etc/odbcinst.ini''' (or copy from helios) to contain:&lt;br /&gt;
	&amp;lt;pre&amp;gt;&lt;br /&gt;
        [FreeTDS]&lt;br /&gt;
	Description     = TDS driver (Sybase/MS SQL)&lt;br /&gt;
	Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so&lt;br /&gt;
	Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so&lt;br /&gt;
	CPTimeout       =&lt;br /&gt;
	CPReuse         =&lt;br /&gt;
	FileUsage       = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
6.	Unzip the zip file in step 1, cd to the directory '''pyodbc-3.0.7''', and type:&lt;br /&gt;
       &amp;lt;pre&amp;gt;&lt;br /&gt;
        python setup.py build&lt;br /&gt;
        sudo python setup.py install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then use the Linux connection string shown in the introduction, and proceed as before.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Examples of returned queries ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;select top 2 * from fV26_vD1 order by Timestamp&amp;quot;)&lt;br /&gt;
rows = cursor.fetchall()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the entire &amp;quot;dimension-1&amp;quot; data for the first two entries in the table, ordered by ''Timestamp''.  The contents can be accessed one row at a time (&amp;lt;code&amp;gt;rows[0]&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;rows[1]&amp;lt;/code&amp;gt; in this case), and the entry names can be listed with &amp;lt;code&amp;gt;rows[0].cursor_description&amp;lt;/code&amp;gt;.  One can access the value of, say, the current temperature via &amp;lt;code&amp;gt;rows[0].Sche_Data_Weat_Temperature&amp;lt;/code&amp;gt;.  The next example selects entries for a single antenna:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;&amp;quot;&amp;quot;select top 20 * from fV26_vD15 a where (a.[I15] % 15) = 0 order by Timestamp&amp;quot;&amp;quot;&amp;quot;)&lt;br /&gt;
rows = cursor.fetchall()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Returns the first 20 rows for Antenna 1 [via &amp;lt;code&amp;gt;(a.[I15] % 15) = 0&amp;lt;/code&amp;gt;] of the dimension-15 table, ordered by ''Timestamp''.  One can check that these are all for the same antenna, for example, by checking one of the pointing coefficients using the Python code:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for row in rows: print row.Ante_Cont_PointingCoefficient2&amp;lt;/pre&amp;gt;&lt;br /&gt;
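In the dimension-15 tables, each timestamp has 15 interleaved rows, one per antenna, so antenna ''n'' (1-based) corresponds to rows whose index column satisfies index mod 15 = ''n''-1.  This selection can be sketched with a small helper (the function name is hypothetical, not part of any EOVSA module):&lt;br /&gt;

```python
def antenna_filter(n, dim=15):
    """Build the SQL where-clause that selects antenna n (1-based) from a
    dimension-`dim` table, matching the (a.[I15] % 15) = 0 pattern above."""
    return "(a.[I%d] %% %d) = %d" % (dim, dim, n - 1)

# Antenna 1 in a dimension-15 table:
print(antenna_filter(1))   # prints (a.[I15] % 15) = 0
```

The returned clause can be spliced into the queries above in place of the literal modulo test.&lt;br /&gt;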
&lt;br /&gt;
&lt;br /&gt;
=== Create a new StateFrameDef Table entry ===&lt;br /&gt;
&lt;br /&gt;
Whenever a new version of the stateframe is created due to some change in the content of the stateframe, a new StateFrameDef table entry must be created.  This is accomplished via a set of SQL commands, like:&lt;br /&gt;
&amp;lt;pre&amp;gt;cursor.execute(&amp;quot;insert into StateFrameDef (Status, Version, Dimension, DataType,  FieldBytes,  DimOffset,  StartByte,  FieldNum,  FieldName) &lt;br /&gt;
                    values ( 0, 15, 15, 'u16', 2, 28, 759, 1, 'DCM_Slot')&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and so on for the entire new table.  Once all of the rows of the table are entered, the command to create the tables for that version is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;update StateFrameDef set status=1 where Version='15'&amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NB: Once a stateframedef entry is entered, it cannot be entered again without causing unrecoverable errors in the table.  See below for a description of the procedure to clear all of the tables and start over.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Complete process of inserting a new binary record ===&lt;br /&gt;
&lt;br /&gt;
The structure of binary data for a given stateframe version is encoded in its XML file with the name 'stateframe_vxx.00.xml', where xx is the version number, e.g. 26.  The binary data has to be rearranged via a routine in '''stateframedef.py''' called &amp;lt;code&amp;gt;transmogrify()&amp;lt;/code&amp;gt;.  A complete recipe for reading and inserting data from a stateframe log file ('sf_20140205_v26.0.log' in this example) is as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import stateframedef as sfd&lt;br /&gt;
import pyodbc&lt;br /&gt;
sf, version = sfd.rxml.xml_ptrs('stateframe_v26.00.xml')&lt;br /&gt;
brange, outlist = sfd.sfdef(sf)&lt;br /&gt;
f = open('sf_20140205_v26.0.log', 'rb')&lt;br /&gt;
buf = f.read(32)&lt;br /&gt;
recsize = sfd.struct.unpack_from('i', buf, 16)[0]&lt;br /&gt;
f.close()&lt;br /&gt;
cnxn = pyodbc.connect(&amp;quot;DRIVER={SQL Server};SERVER=128.235.89.168,1433; &lt;br /&gt;
DATABASE=eOVSA06;UID=aaa;PWD=I@bsbn2w;&amp;quot;)&lt;br /&gt;
cursor = cnxn.cursor()&lt;br /&gt;
with open('sf_20140205_v26.0.log', 'rb') as f:&lt;br /&gt;
    bufin = f.read(recsize)&lt;br /&gt;
    bufout = sfd.transmogrify(bufin, brange)&lt;br /&gt;
    try:&lt;br /&gt;
        cursor.execute('insert into fBin (Bin) values (?)',&lt;br /&gt;
                       pyodbc.Binary(bufout))&lt;br /&gt;
    except:&lt;br /&gt;
        pass&lt;br /&gt;
cnxn.commit()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The execute line may return an error and fail, of course, especially if the data have previously been inserted, so it is enclosed in a try: except: clause so that everything does not immediately stop.  Error checking would go into the except: clause.&lt;br /&gt;
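As a sketch of the error checking suggested above (the function name and its return convention are assumptions for illustration; only the ''fBin'' insert statement comes from the recipe), the bare except clause could be replaced with a routine that reports the failure and carries on:&lt;br /&gt;

```python
def safe_insert(cursor, buf):
    """Insert one transmogrified binary record into fBin.

    Returns True on success; on failure (e.g. a record that was already
    inserted), reports the error and returns False so that a loop over
    many records does not stop at the first problem.
    """
    try:
        cursor.execute('insert into fBin (Bin) values (?)', buf)
        return True
    except Exception as e:
        print('insert into fBin failed: %s' % e)
        return False
```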
&lt;br /&gt;
&lt;br /&gt;
=== Procedure to create the information for a new stateframedef table ===&lt;br /&gt;
&lt;br /&gt;
The Python code in '''stateframedef.py''' does all of the manipulation related to creating StateFrameDef (and also ScanHeaderDef) tables as well as converting stateframe (or scanheader) binary data to the database data.  For various reasons involving the internals of SQL, it is necessary to reorder certain data (two-dimensional arrays) in the stateframe and scanheader before they can be saved in the SQL database.  These special cases are handled in the bowels of '''stateframedef.py'''’s &amp;lt;code&amp;gt;walk_keys()&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
To create a stateframedef table from a new stateframe xml file, e.g. stateframe_v32.00.xml, do:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import stateframedef as sfd&lt;br /&gt;
sf, version = sfd.rxml.xml_ptrs('stateframe_v32.00.xml')&lt;br /&gt;
brange, outlist = sfd.sfdef(sf)&lt;br /&gt;
sfd.startbyte(outlist)&lt;br /&gt;
tbl = sfd.outlist2table(outlist,version)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This last line prints the table to the screen and also creates the contents of the table as commands for input to SQL.  To upload the table in SQL and activate it, the commands are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for line in tbl:&lt;br /&gt;
    cursor.execute(line)&lt;br /&gt;
cursor.execute(&amp;quot;update StateFrameDef set status=1 where Version='&amp;quot; +  &lt;br /&gt;
               str(int(version)) + &amp;quot;'&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== To close a connection ===&lt;br /&gt;
&lt;br /&gt;
Once a connection cnxn has been created, and a cursor has been defined, do the following to release them:&lt;br /&gt;
&amp;lt;pre&amp;gt;cursor.close()&lt;br /&gt;
del cursor&lt;br /&gt;
cnxn.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== To clear all tables and start from scratch ===&lt;br /&gt;
&lt;br /&gt;
The SQL tables that describe each version of the stateframe or scanheader are called (case-insensitive) StateFrameDef and ScanHeaderDef.  If somehow these get confused (as in entering a line that is already in the table), any previously backed-up tables can be restored using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;ov_fTEST_DefRestore&amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, if no appropriate backup exists, the tables need to be cleared and reloaded.  This is a very quick process, luckily.  To empty the “definition” tables, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;cursor.execute(&amp;quot;ov_fTEST_DefTruncate&amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To reload them, the intent is to provide a single function (not available yet):&lt;br /&gt;
&amp;lt;pre&amp;gt;reload_deftables(),&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which will clear the tables and reload them automatically.  Right now, there is a routine that does this for one file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
flag = load_deftable(xml_file),&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which returns True if successful, or False if an error.  If a definition for the specified version already exists in the table, a warning is generated and the table is not redefined, but the routine returns True.  This avoids trying to redefine the table and thus messing it up.  The &amp;lt;code&amp;gt;reload_deftables()&amp;lt;/code&amp;gt; routine will just clear the tables, and then take all xml files in a directory and repeatedly call &amp;lt;code&amp;gt;load_deftable()&amp;lt;/code&amp;gt;.&lt;br /&gt;
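A minimal sketch of the intended &amp;lt;code&amp;gt;reload_deftables()&amp;lt;/code&amp;gt; is given below; since the routine does not exist yet, its signature here is only an assumption (the loader, e.g. &amp;lt;code&amp;gt;load_deftable&amp;lt;/code&amp;gt;, is passed in to keep the sketch self-contained):&lt;br /&gt;

```python
import glob
import os

def reload_deftables(cursor, xml_dir, loader):
    """Clear the definition tables, then reload every stateframe/scanheader
    XML file found in xml_dir via loader (e.g. load_deftable).  Returns
    True only if every file loads successfully."""
    cursor.execute('ov_fTEST_DefTruncate')   # empty the definition tables
    ok = True
    for xml_file in sorted(glob.glob(os.path.join(xml_dir, '*.xml'))):
        ok = loader(xml_file) and ok
    return ok
```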
&lt;br /&gt;
&lt;br /&gt;
=== Getting data across version boundaries ===&lt;br /&gt;
&lt;br /&gt;
The data for each version of the stateframe appears in unique tables, so that the information for one period of time may be in, for example, fV32_vD15, while for the next adjacent time it is in fV35_vD15 (no data were recorded for versions 33-34).  If one wants to get data that spans these two tables, one would use the following query (this example is for Ante_Cont_Elevation1):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cursor.execute(&amp;quot;&amp;quot;&amp;quot;select Timestamp, Ante_Cont_Elevation1 &lt;br /&gt;
                  from fV32_vD15 a where (a.[i15] % 15) = 0 &lt;br /&gt;
                  union all &lt;br /&gt;
                  select Timestamp, Ante_Cont_Elevation1 &lt;br /&gt;
                  from fV35_vD15 b where (b.[i15] % 15) = 0 &lt;br /&gt;
                  order by TimeStamp&amp;quot;&amp;quot;&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that it is necessary to include in the select list the column (TimeStamp in this case) that is to be used for ordering the data.  Otherwise one gets the cryptic error message ''[42000] ORDER BY items must appear in the select list if the statement contains a UNION, INTERSECT or EXCEPT operator.''  Once the data are selected, the following lines will plot them:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from numpy import zeros&lt;br /&gt;
from matplotlib.pyplot import plot&lt;br /&gt;
rows = cursor.fetchall()&lt;br /&gt;
elev = zeros(len(rows),'float')&lt;br /&gt;
times = zeros(len(rows),'float')&lt;br /&gt;
for i,x in enumerate(rows):&lt;br /&gt;
    times[i], elev[i] = x&lt;br /&gt;
plot(times-times[0],elev/10000.,'.')&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New python code for accessing the database ===&lt;br /&gt;
&lt;br /&gt;
There is a module called dbutil that contains (and will further be developed) routines to access the database.  The two current routines are:&lt;br /&gt;
* &amp;lt;code&amp;gt;cursor = get_cursor()&amp;lt;/code&amp;gt;   Opens the database and returns a cursor for access to it.&lt;br /&gt;
* &amp;lt;code&amp;gt;mydict = get_dbrecs(cursor, version, dimension, timestamp, nrecs)&amp;lt;/code&amp;gt;   Takes as input an open cursor and version, dimension, timestamp and nrecs, and returns a dictionary from the table indicated by the version and dimension, starting at timestamp, and having nrecs entries.  The data in each mydict key has dimensions of dimension x nrecs.&lt;/div&gt;</summary>
		<author><name>Mlukicheva</name></author>
	</entry>
	<entry>
		<id>http://ovsa.njit.edu//wiki/index.php?title=Calibration_Database&amp;diff=114</id>
		<title>Calibration Database</title>
		<link rel="alternate" type="text/html" href="http://ovsa.njit.edu//wiki/index.php?title=Calibration_Database&amp;diff=114"/>
		<updated>2016-09-20T17:10:11Z</updated>

		<summary type="html">&lt;p&gt;Mlukicheva: Created page with &amp;quot;== Description and Use of the EOVSA Calibration Database ==   === Background ===  We have created a general-purpose table in the SQL-Server database ''eOVSA06'', named ''abin'...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description and Use of the EOVSA Calibration Database ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background ===&lt;br /&gt;
&lt;br /&gt;
We have created a general-purpose table in the SQL-Server database ''eOVSA06'', named ''abin'', which is used to hold binary calibration data in a general format given by an XML format string in the same table. The table is meant to be extendable to any calibration type, although it remains to be seen whether it is general enough to handle all use cases. This document describes the scheme, the format of the ''abin'' entries, and the list of currently defined binary types (this will have to be updated on a regular basis as new definitions are added).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Description of the General Scheme ===&lt;br /&gt;
&lt;br /&gt;
The general idea is to create entries into the ''abin'' table that are self-describing and completely general.&lt;br /&gt;
&lt;br /&gt;
The table columns are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;['Bin', 'Timestamp', 'Version', 'Id', 'Description']&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''Id'' number is auto-incremented to be unique to each record, and is never set by the user. Each type definition appears in the table with an ''n''.0 ''Version'' number (float), and whenever it is updated, a new ''n''.0 record is written with the current ''Timestamp''. This provides a history, with the corresponding ''Timestamp'' giving the start of the time range of applicability (in hindsight, this key might better have been called the calibration ''Type'', since that more accurately describes its purpose). To distinguish between this key and the true versions given within the type definition record, the latter is referred to as the &amp;quot;internal version.&amp;quot; The ''Bin'' column contains an XML data description that is to be used to decode the data. The ''Version'' (type) number ''n'' will be unique for each calibration type, so that records with ''Version'' = 1.0, for example, will always contain the latest definition for a particular type of data defined as type 1 (the type of calibration data is further described in the ''Description'' column). The type definitions, as well as helper routines for creating, reading, and writing records, are found in the Python module '''cal_header.py'''.&lt;br /&gt;
&lt;br /&gt;
The XML data itself, found in the ''Bin'' column of a ''Version n''.0 record, contains an internal version variable that gives a further record of the version of the XML format. As a concrete example, the latest ''Version'' 4.0 (delay centers) calibration will contain an XML string that includes its own internal version variable, say 2.1, which distinguishes it from an earlier type 4.0 version. This internal version number is used by &amp;lt;code&amp;gt;send_xml2sql()&amp;lt;/code&amp;gt; to determine whether a definition defined in '''cal_header.py''' has changed and needs to be written to the ''abin'' table.&lt;br /&gt;
&lt;br /&gt;
After (never before) the defining ''n''.0 record is written, subsequent records of that type can be written containing the binary calibration data, which will be decoded using the defining XML string. Thus, after writing the latest ''Version'' 4.0 format record, subsequent records with Version 4.1 (type 4, with internal version 1.0) can be written that will be decoded using that latest 4.0 XML string. Other versions, e.g. 4.2 (internal version 2.0) etc., could in principle be written, although it is not clear why that would be needed (perhaps an important change to the contents, but without a corresponding change to the format, could be indicated with a new 4.x version number). Thus, the latest &amp;lt;code&amp;gt;delay_centers&amp;lt;/code&amp;gt; entry can be read with a query like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;SELECT TOP 1 * FROM abin WHERE Version &amp;gt; 4.0 AND Version &amp;lt; 5.0 ORDER BY Timestamp DESC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
while the &amp;lt;code&amp;gt;delay_centers&amp;lt;/code&amp;gt; entry for a given ''Timestamp tstamp'' can be read with a query like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;SELECT TOP 1 * FROM abin WHERE Version &amp;gt; 4.0 AND Version &amp;lt; 5.0 AND Timestamp &amp;lt;= tstamp ORDER BY Timestamp DESC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In testing this, it was discovered that binary records returned by such a query are limited in length to 4096 bytes. To get an arbitrarily long record, one must prepend the string &amp;quot;SET TEXTSIZE 2147483647&amp;quot; to the query. Note that such details are already handled by the helper routine &amp;lt;code&amp;gt;read_cal()&amp;lt;/code&amp;gt; in '''cal_header.py'''. As new calibration types are created, their definitions will be added to '''cal_header.py''', both by updating the &amp;lt;code&amp;gt;cal_types()&amp;lt;/code&amp;gt; routine to add the new type's ''Version'' number and ''Description'', and by adding two writing routines: one called &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2xml()&amp;lt;/code&amp;gt; that returns the XML description of the data (later written into the database by &amp;lt;code&amp;gt;send_xml2sql()&amp;lt;/code&amp;gt;), and one called &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2sql()&amp;lt;/code&amp;gt; that converts the calibration data to a binary buffer and writes it into the database, where &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;&amp;lt;/code&amp;gt; is a hopefully rational name for the new type. As new formats for an existing type are created, it should be fine to simply update the &amp;lt;code&amp;gt;cal_types()&amp;lt;/code&amp;gt; routine to change the description (if needed) and update the format embodied in the &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2xml()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2sql()&amp;lt;/code&amp;gt;&lt;br /&gt;
routines. It should not be necessary to keep the old format, since the database itself already forms a history. Of course, any previous versions of the '''cal_header.py''' file will also be kept in the '''github''' versioning system.&lt;br /&gt;
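The query pattern just described, with the TEXTSIZE prefix folded in, can be captured in a small helper (the function name is hypothetical; the SQL itself follows the examples in this section):&lt;br /&gt;

```python
def latest_cal_query(caltype, tstamp=None):
    """Build the SQL fetching the most recent abin record of integer type
    `caltype` (e.g. 4 for delay centers), optionally at or before the given
    Timestamp, with SET TEXTSIZE prepended so records longer than 4096
    bytes are returned whole."""
    q = ('SET TEXTSIZE 2147483647 '
         'SELECT TOP 1 * FROM abin '
         'WHERE Version > %d.0 AND Version < %d.0' % (caltype, caltype + 1))
    if tstamp is not None:
        q += ' AND Timestamp <= %d' % tstamp
    return q + ' ORDER BY Timestamp DESC'
```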
&lt;br /&gt;
&lt;br /&gt;
=== Currently-Defined Types ===&lt;br /&gt;
&lt;br /&gt;
This section will hopefully be updated whenever new types are added, to provide a list of currently-defined calibration data types. However, it is probably wise to consult the '''cal_header.py''' file to verify the current definitions. Here is the verbatim return statement from &amp;lt;code&amp;gt;cal_types()&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
return {1:['Total power calibration (output of SOLPNTCAL)','proto_tpcal2xml',1.0],&lt;br /&gt;
&lt;br /&gt;
        2:['DCM master base attenuation table [units=dB]','dcm_master_table2xml',1.0],&lt;br /&gt;
&lt;br /&gt;
        3:['DCM base attenuation table [units=dB]','dcm_table2xml',1.0],&lt;br /&gt;
&lt;br /&gt;
        4:['Delay centers [units=ns]','dlacen2xml',1.0]}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To add a new type, simply add another entry to this dictionary, with a unique type number, and a three-element list whose first element is the ''Description'' string, whose second element is the string name of the routine that creates the XML definition (returning a binary buffer ready for writing to the ''abin'' table), and whose third element is the version number. Then add the corresponding &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2xml()&amp;lt;/code&amp;gt; routine defining the format of the binary data, and the &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2sql()&amp;lt;/code&amp;gt; routine that converts the calibration data to a corresponding binary buffer. The '''cal_header.py''' module includes a routine &amp;lt;code&amp;gt;send_xml2sql()&amp;lt;/code&amp;gt;, which can be called at any time; it checks the latest version of each calibration type in the ''abin'' table and updates any that have changed (i.e., any whose version number differs from the latest one in the table). The return statement of each &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2sql()&amp;lt;/code&amp;gt; routine should call &amp;lt;code&amp;gt;write_cal()&amp;lt;/code&amp;gt; to actually write the binary buffer to the database, so that a single call to the routine does everything. It is anticipated that routines that create the calibration data will call the corresponding &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2sql()&amp;lt;/code&amp;gt; routine directly.&lt;br /&gt;
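For example, registering a new type might look like the following sketch. The type number 5, its description, and the routine names &amp;lt;code&amp;gt;example2xml()&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;example2sql()&amp;lt;/code&amp;gt; are illustrative only, not part of the actual '''cal_header.py''' code:&lt;br /&gt;

```python
# Hypothetical sketch of registering a new calibration type.  Entry 5 and the
# routine names are invented for illustration; entries 1-4 are the real ones.
def cal_types():
    return {1: ['Total power calibration (output of SOLPNTCAL)', 'proto_tpcal2xml', 1.0],
            2: ['DCM master base attenuation table [units=dB]', 'dcm_master_table2xml', 1.0],
            3: ['DCM base attenuation table [units=dB]', 'dcm_table2xml', 1.0],
            4: ['Delay centers [units=ns]', 'dlacen2xml', 1.0],
            5: ['Example calibration [units=arbitrary]', 'example2xml', 1.0]}  # new entry

def example2xml():
    # Would build and return the XML description of the new type's binary
    # layout; stubbed here, since the real format depends on the data.
    raise NotImplementedError

def example2sql(data):
    # Would pack the calibration data into a binary buffer and finish by
    # calling write_cal(), so a single call does everything; stubbed here.
    raise NotImplementedError
```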
&lt;br /&gt;
To change an existing type, change the description in the &amp;lt;code&amp;gt;cal_types()&amp;lt;/code&amp;gt; routine, if desired, and change the corresponding &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2xml()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;lt;type&amp;gt;2sql()&amp;lt;/code&amp;gt; routines to create the new definition. It should not be strictly necessary to increment the version number that will be written into the XML description, unless two active versions are needed at the same time. It is up to the programmer whether to increment the minor (fractional) or major (integer) part of the version number, since only uniqueness is required.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reading Back Data for a given Calibration Type ===&lt;br /&gt;
&lt;br /&gt;
If the above scheme is followed, it should be possible to use a single, general routine to find and successfully read the binary calibration data for a given time. The &amp;lt;code&amp;gt;read_cal()&amp;lt;/code&amp;gt; routine in the '''cal_header.py''' module does this, returning a Python dictionary and the binary buffer. The dictionary contains key/value pairs whose keys are the variable names and whose values give the type and start location of each variable in the binary buffer. To use these returned entities, one employs the &amp;lt;code&amp;gt;extract()&amp;lt;/code&amp;gt; routine defined in the '''stateframe.py''' module, e.g. to read the total power (type 1) calibration factors for antenna 5 on April 3, 2016 as of 20:00 UT:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import cal_header, stateframe, util&lt;br /&gt;
&lt;br /&gt;
tp, buf = cal_header.read_cal(1, t=util.Time('2016-04-03 20:00'))&lt;br /&gt;
&lt;br /&gt;
calfac = stateframe.extract(buf, tp['Antenna'][4]['Calfac'])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the index for antenna 5 is 4, since it is a zero-based index. Note that to read the values for the current time, the input ''t'' can be omitted.&lt;/div&gt;</summary>
		<author><name>Mlukicheva</name></author>
	</entry>
</feed>