FenStruct module

FenStruct is a module that gathers all the functions that parse the main FENICS test data folders, searching for FastResults.json and SlowResults.json files.

Functions

FENICS1 data explorer

FenStruct.fen1_data_extractor

func FenStruct.fen1_data_extractor(mainDataDir, burnsTablePath) source

Function that crawls a given directory for FENICS1 board test data files, together with a given burn-time table CSV file. It returns the data as a FenDataFrame.

Important It is highly recommended to use this function to gather FENICS1 data, save the result as a JSON file, then use FenLoad.FenicsData_read(path) to read it back.

Important There is no function to update existing FENICS DataFrames with new tests; a new scan of all the files with this function is needed.

parameters :

mainDataDir (string) :

path to the main directory to crawl for FENICS data.

burnsTablePath (string) :

path to a CSV file containing the burning hours of FENICS1 boards and their ids.

Returns :

FenDataFrame (pandas.DataFrame) :

Multilevel columns FenDataFrame.

Example :

 #Directories definition
 mainDataDir = "/AtlasDisk/user/FENICS/"
 burnTablePath = "/users/divers/atlas/sisaid/home2/data/burns.csv"

 #FENICS1 data collection
 FENICS = FATL.fen1_data_extractor(mainDataDir, burnTablePath)
 FENICS.to_json('/AtlasDisk/home2/sisaid/data/FENICS_data.json')

resulting FenDataFrame :

Multilevel column groups (second-level names under each top-level group):

Board : name, version, id, code, burnings, ...
FastResult : ..., NoiseLG, NoiseLGIG1, NoiseLGIG2, NoiseLGIG6, ...
SlowResult : ..., Gain0, Gain1, Gain2, Gain3, Gain4, Gain5
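The layout above can be imitated with a small pandas sketch. The column names are taken from the table; the values and the exact assignment of sub-columns to groups are illustrative assumptions, not the module's real output:

```python
import pandas as pd

# Hypothetical two-level column layout of a FenDataFrame.
# Column names come from the table above; values are dummies.
columns = pd.MultiIndex.from_tuples([
    ("Board", "name"), ("Board", "version"), ("Board", "id"),
    ("FastResult", "NoiseLG"), ("FastResult", "NoiseLGIG1"),
    ("SlowResult", "Gain0"), ("SlowResult", "Gain1"),
])
FENICS = pd.DataFrame(
    [["FENICS1_042", 1, 42, 0.5, 0.6, 1.01, 1.02]],
    columns=columns,
)

# Top-level groups can be selected directly.
print(FENICS["Board"]["name"].iloc[0])  # → FENICS1_042
```

Saving such a frame with to_json flattens the two-level columns, which is presumably why the dedicated FenLoad.FenicsData_read(path) reader exists.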

FENICS2 data explorer

FenStruct.fen2_data_extractor

func FenStruct.fen2_data_extractor(mainDataDir) source

Function that crawls a given directory for FENICS2 board test data. It requires an internet connection to the CERN cloud through a tunnel, and database credentials to gather burning times and failure statuses from the FENICS2 MySQL database. The credentials are saved in the OS environment for the session.

It returns the data as FenDataFrame.

Important It is highly recommended to use this function to gather FENICS2 data, save the result as a JSON file, then use FenLoad.FenicsData_read(path) to read it back.

Important There is no function to update existing FENICS DataFrames with new tests; a new scan of all the files with this function is needed.

Important Be sure that you are connected to the CERN servers with a tunnel opened in an external terminal. paramiko seems to connect over SSH but cannot open a tunnel, which blocks full automation of the process.
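The credential caching described above can be sketched with standard-library calls. This is a minimal sketch of the pattern; the environment-variable names FENICS_DB_USER and FENICS_DB_PASS are hypothetical, not necessarily the ones the module actually uses:

```python
import os
from getpass import getpass

def get_db_credentials():
    """Return (user, password), prompting at most once per session and
    caching the answers in the process environment.
    NOTE: the variable names below are hypothetical placeholders."""
    if "FENICS_DB_USER" not in os.environ:
        os.environ["FENICS_DB_USER"] = input("MySQL user: ")
    if "FENICS_DB_PASS" not in os.environ:
        os.environ["FENICS_DB_PASS"] = getpass("MySQL password: ")
    return os.environ["FENICS_DB_USER"], os.environ["FENICS_DB_PASS"]
```

Because the values live in os.environ, repeated extractor runs in the same session reuse them without prompting again.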

parameters :

mainDataDir (string) :

path to the main directory to crawl for FENICS data.

Returns :

FenDataFrame (pandas.DataFrame) :

Multilevel columns FenDataFrame.

Example :

 #Directories definition
 mainDataDir = "/AtlasDisk/user/tilefen/FENICS2/"

 #FENICS2 data collection
 FENICS2 = FATL.fen2_data_extractor(mainDataDir)
 FENICS2.to_json('/AtlasDisk/home2/sisaid/data/FENICS2_data.json')

resulting FenDataFrame :

Multilevel column groups (second-level names under each top-level group):

Board : name, version, id, code, burnings, ...
FastResult : ..., NoiseLG, NoiseLGIG1, NoiseLGIG2, NoiseLGIG6, ...
SlowResult : ..., Gain0, Gain1, Gain2, Gain3, Gain4, Gain5
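Sub-frames can be pulled out of such a multilevel-column FenDataFrame with plain pandas indexing. A minimal sketch on a dummy frame (column names and values are illustrative, not real test data):

```python
import pandas as pd

# Dummy frame with the same two-level column layout as above.
columns = pd.MultiIndex.from_tuples([
    ("Board", "id"), ("Board", "burnings"),
    ("SlowResult", "Gain0"), ("SlowResult", "Gain1"),
])
FENICS2 = pd.DataFrame(
    [[7, 120.0, 1.01, 0.99],
     [8, 300.0, 1.02, 1.00]],
    columns=columns,
)

# Select a whole top-level group as a sub-frame...
slow = FENICS2["SlowResult"]        # columns: Gain0, Gain1
# ...or a single sub-column with a tuple key.
ids = FENICS2[("Board", "id")]
print(list(ids))  # → [7, 8]
```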

FENICS data parser

FenStruct.data_parser

func FenStruct.data_parser(mainDataDir, path, fen_ver, burnsTable=None, mySQL=None) source

A multipurpose parsing function for both FENICS1 and FENICS2 data folders.

parameters :

mainDataDir (string) :

path to the main directory to crawl for FENICS data.

path (string) :

path of the parsed folder, relative to the main directory.

fen_ver (integer) :

version number of the parsed FENICS (1 or 2).

burnsTable (pandas.DataFrame,optional) , default=None :

(Only for FENICS1) burnsTable DataFrame built from the burn-time CSV file.

mySQL (mysql.cursor,optional) , default=None :

(Only for FENICS2) mysql.connector cursor obtained after connecting to the MySQL database.

Returns :

parsedFolderData (dictionary) :

dictionary of the data parsed inside the folder.

Example :

 for i, FENICS_id_prefix in enumerate(FENICS_to_analyse):
     loadingBar(i, len(FENICS_to_analyse), prefix=f"parsing {FENICS_id_prefix} data...")
     filteredFolders = [string for string in os.listdir(mainDataDir) if string.startswith(FENICS_id_prefix)]
     for subfolder in filteredFolders:
         for subsubfolder in os.listdir(mainDataDir + subfolder):
             for fileName in os.listdir(mainDataDir + subfolder + "/" + subsubfolder):
                 if fileName.endswith("SlowResult.json") and subsubfolder.startswith("20"):
                     path = subfolder + "/" + subsubfolder + "/"
                     FENICS = FENICS.append(data_parser(mainDataDir, path, 1, burnsTable), ignore_index=True)
 print("Parsing done with success.", end='\r')
 return FENICS