DG Notes on 20170904 Event

== Purpose ==

There are various scripts for doing the imaging and self-calibration, but they require some explanation of how to use them. This page provides a step-by-step explanation of the analysis of one event, the M5.5 flare that occurred on 2017-Sep-04. I did an initial study of this event earlier, but in a hurried fashion in order to use the results for a proposal (which was successful, by the way!). I would like to do a more careful job with the initial calibration and analysis, and then continue with a second round of self-calibration.

== Initial Preparation ==

The data begin as raw data in Miriad format, recorded to disk by the pipeline system in the form of IDB files. Each file contains 10 minutes of data, and several such files together represent a scan. The easiest way to identify the time period of interest is to examine the overview spectral plots on the RHESSI Browser, with "EOVSA Radio Data" checked in the upper left. Use the time you identify to find the corresponding IDB file(s) under /data1/eovsa/fits/IDB/yyyymmdd/ (also accessible from the EOVSA web site). The scan containing this flare is quite a long one, starting at 18:53:33 UT and continuing to about 21:30:00 UT.
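
To see at a glance which IDB files exist for a given date, a quick listing from Python is useful (a minimal sketch; the glob pattern simply matches every IDB file in the day's directory):

import glob
# List the raw IDB files recorded on 2017-Sep-04; each file name encodes
# its start time as IDByyyymmddHHMMSS.
files = sorted(glob.glob('/data1/eovsa/fits/IDB/20170904/IDB*'))
for f in files:
    print(f)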

To know the exact time of interest takes some iteration, especially for such a long flare with several peaks and smaller events. After some checking, I settled on the three 10-min files starting at 20:23:33, 20:33:33, and 20:43:33 UT. Undoubtedly an additional one or two 10-min intervals after the end time of 20:53:33 would be useful for following the decay of the event, but in the interest of time, analysis of these is postponed.
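
Note that the chosen start times map directly onto the IDB file names, which encode the start time as IDByyyymmddHHMMSS (a minimal sketch of the convention, inferred from the file names above):

from util import Time
# 20:23:33 UT on 2017-Sep-04 corresponds to file IDB20170904202333.
t = Time('2017-09-04 20:23:33')
print('IDB' + t.datetime.strftime('%Y%m%d%H%M%S'))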

== Importing EOVSA Data ==

Once the desired timerange is known, the next step is to convert the raw IDB files to calibrated CASA measurement set (ms) files. The script below does this from within CASA, and must be run on pipeline, which has access to the raw data and the SQL database. This script may require 30 minutes or more to run.

from suncasa.tasks import task_calibeovsa as calibeovsa
from suncasa.tasks import task_importeovsa as timporteovsa
import dump_tsys as dt
from util import Time
import numpy as np
import os

get_tp = True
# Working directory for this event; create it if needed and move there.
workpath = '/data1/dgary/solar/20170904_Mflare/'
if not os.path.exists(workpath):
    os.makedirs(workpath)
os.chdir(workpath)
# This time range will read files IDB20170904202333, IDB20170904203333,
# IDB20170904204333, and IDB20170904205333
trange = Time(['2017-09-04 20:20:00', '2017-09-04 21:00:00'])
# Read the file database for the day and select solar scans whose start
# timestamps fall within the desired time range.
info = dt.rd_fdb(Time('2017-09-04'))
sidx = np.where(
    np.logical_and(info['SOURCEID'] == 'Sun', info['PROJECTID'] == 'NormalObserving') & np.logical_and(
        info['ST_TS'].astype(float) >= trange[0].lv,
        info['ST_TS'].astype(float) <= trange[1].lv))
filelist = info['FILE'][sidx]
# Output directory for the converted ms files.
outpath = 'msdata/'
if not os.path.exists(outpath):
    os.makedirs(outpath)
inpath = '/data1/eovsa/fits/IDB/{}/'.format(trange[0].datetime.strftime("%Y%m%d"))
ncpu = 1
# Convert each selected IDB file to a CASA ms file, applying the udb_corr
# corrections (feed rotation and attenuation) along the way.
timporteovsa.importeovsa(idbfiles=[inpath + ll for ll in filelist], ncpu=ncpu, timebin="0s", width=1,
                         visprefix=outpath, nocreatms=False, doconcat=False, modelms="",
                         doscaling=False, keep_nsclms=False, udb_corr=True)
msfiles = [outpath + ll + '.ms' for ll in filelist]
# Apply reference phase and daily phase calibration, flag antennas 13-15,
# and concatenate the calibrated files into a single ms file.
concatvis = os.path.basename(msfiles[0])[:11] + '_2023-2053.ms'
vis = calibeovsa.calibeovsa(msfiles, caltype=['refpha', 'phacal'], interp='nearest', doflag=True,
                            flagant='13~15', doimage=False, doconcat=True,
                            msoutdir='msdata', concatvis=concatvis, keep_orig_ms=True)

This will result in the selected IDB files (four in this case, IDB20170904202333 through IDB20170904205333, per the time range comment in the script) being converted to calibrated ms files in the msdata directory and concatenated into the single file IDB20170904_2023-2053.ms. Each IDB file takes roughly 2 minutes to process.
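
As a quick sanity check on the result, the concatenated ms file can be summarized from within CASA using the listobs task (a sketch; the file name assumes the concatvis constructed in the script above):

# Summarize the calibrated, concatenated measurement set; writes a text
# summary alongside the ms file.
listobs(vis='msdata/IDB20170904_2023-2053.ms', listfile='msdata/IDB20170904_2023-2053.listobs')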