ORE DB
ORE DB is
- a collection of SQL Data Definition Language (DDL) scripts to create a database for storing ORE parameters, trade data and market data/fixing data.
- Perl scripts to transform XML configuration files from given folders (if no arguments are given, the examples are transformed) and market data/fixing data into SQL Data Manipulation Language (DML) scripts.
- Windows cmd shell scripts that execute the Perl scripts and the (generated) SQL DDL/DML scripts.
- currently MS SQL Server and MySQL are supported (i.e. have been tested). The proof-of-concept database extraction and ORE run is only possible with MS SQL Server, as it supports XML creation for SQL queries.
SQL DDL scripts
Tables
The main rationale was to
a) follow the structure set up by the XML schemata
b) have relational integrity where possible
The following SQL scripts define the tables:
- MdatTables.sql: all marketdata related tables (marketdata, fixingdata and covariancedata)
- NettingTables.sql: all tables for nettingset definitions
- PortfolioTables.sql: all tables for the portfolio definitions
- ResultsTables.sql: all tables for ORE results
- TypesTables.sql: all tables for the referenced types
- ConfigTables.sql: the OreConfiguration table holding all the XML configs for calculating results
For more details, see the SchemaSpy-generated documentation.
XMLselectionViews
These are MS SQL Server XML query views that extract the data from the portfolio and netting set tables into usable XML text. The logic of this extraction is detailed at https://rkapl123.github.io/SQLServerXML.html
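Conceptually, these views reassemble relational rows into nested XML. The following standalone Python sketch illustrates the idea only; the actual views use MS SQL Server XML queries, and the element and column names below are hypothetical, not taken from the ORE DB schema.

```python
# Conceptual illustration (not the actual views): reassemble relational rows
# into nested XML, similar in spirit to what the MS SQL Server XML query
# views do. The row layout and element names here are hypothetical.
import xml.etree.ElementTree as ET

def rows_to_xml(trade_id, legs):
    trade = ET.Element("Trade", id=trade_id)
    for leg in legs:                      # one child element per relational row
        leg_el = ET.SubElement(trade, "LegData")
        for col, val in leg.items():
            ET.SubElement(leg_el, col).text = str(val)
    return ET.tostring(trade, encoding="unicode")

xml_text = rows_to_xml("Swap_1", [{"LegType": "Fixed", "Currency": "EUR"}])
print(xml_text)
```

The real extraction logic, including how nested trade structures map to joined tables, is described at the linked page.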
SQL DML script generation
DML scripts are generated by perl scripts into the Data folder.
ORE configuration parameters
The cmd shell script runConvertXML2SQL.cmd (which calls the Perl script convertXML2SQL.pl) extracts the XML of the ORE configuration parameters and trade data and converts it into DML. This can be done either for the provided standard examples (as delivered with ORE) or for other folders. The folders to be used are configured either in convertXML2SQL.pl (default values)
# set this to your ORE Root folder
my $oreRoot = ($ARGV[0] ? $ARGV[0] : '../../Engine');
# set this to the folder where configuration files are located
my $configDir = ($ARGV[1] ? $ARGV[1] : "$oreRoot/Examples/Input");
# set this to the xsd schema definitions (should be the ones from the ORE Engine)
my $xsdDir = ($ARGV[2] ? $ARGV[2] : "$oreRoot/xsd");
# set this to your analysis input folder (to translate portfolio, ore parameters, netting sets and simulation/stresstest/sensitivity parameters)
my $inputDir = ($ARGV[3] ? $ARGV[3] : "$oreRoot/Examples/Example_1/Input");
# leave empty to process standard examples
$inputDir = "" if !$ARGV[0];
or by passing arguments, where
- ARG1 .. oreRoot - ORE Root folder
- ARG2 .. configDir - configuration files folder
- ARG3 .. xsdDir - xsd schema definitions folder
- ARG4 .. inputDir - analysis input folder
The convertXML2SQL.pl script requires the following packages to be installed: XML::LibXML and Scalar::Util.
To produce correctly running SQL scripts (due to relational integrity), it is important that all parts of the parametrization files are ordered according to their dependencies. For example: a yield curve (e.g. JPY3M) that depends on another yield curve (ProjectionCurveLong: JPY6M) needs to be put AFTER the curve it depends on, so JPY3M should come after JPY6M.
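The required ordering is a topological sort of the curve definitions by their dependencies. The following minimal Python sketch illustrates this; the curve names and the depends_on mapping are hypothetical examples, and this helper is not part of the ORE DB scripts (the ordering there must be done manually in the parametrization files).

```python
# Illustrative only: a minimal depth-first topological sort showing the
# required ordering of curve definitions by dependency. The curve names and
# the "depends_on" mapping below are hypothetical, not part of ORE DB.

def order_by_dependency(curves, depends_on):
    """Return curves ordered so every curve comes after the curves it depends on."""
    ordered, seen = [], set()

    def visit(curve):
        if curve in seen:
            return
        seen.add(curve)
        for dep in depends_on.get(curve, []):
            visit(dep)          # emit dependencies first
        ordered.append(curve)

    for curve in curves:
        visit(curve)
    return ordered

# JPY3M depends on JPY6M (its ProjectionCurveLong), so JPY6M must come first:
print(order_by_dependency(["JPY3M", "JPY6M"], {"JPY3M": ["JPY6M"]}))
# → ['JPY6M', 'JPY3M']
```

If the parts are out of order, the generated INSERT statements will violate foreign key constraints when executed.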
Market, covariance and fixing data
The cmd shell script runConvertMarketdata2SQL.cmd (which calls the Perl script convertMarketdata2SQL.pl) extracts the text of the ORE market data, covariance data and fixing data and converts it into DML. This can be done either for the provided standard examples (as delivered with ORE) or for other files. The files to be used are configured either in convertMarketdata2SQL.pl (default values)
# set this to your ORE Root folder
my $oreRoot = ($ARGV[0] ? $ARGV[0] : '../../Engine');
# set this to the path/filename of the marketdata/fixingdata/covariance files
my @marketdataFiles = (($ARGV[1] ? $ARGV[1] : "$oreRoot/Examples/Input/market_20160205.txt"),"marketdata_missing.txt");
my $fixingdataFile = ($ARGV[2] ? $ARGV[2] : "$oreRoot/Examples/Input/fixings_20160205.txt");
my $covarianceFile = ($ARGV[3] ? $ARGV[3] : "$oreRoot/Examples/Example_15/Input/covariance.csv");
or by passing arguments, where
- ARG1 .. ORE Root folder
- ARG2 .. marketdata file
- ARG3 .. fixingdata file
- ARG4 .. covariance file
Because the relational integrity of the quotes in the curve configuration relies on quote keys being available in the MarketData definitions table, an additional marketdata file "marketdata_missing.txt" is needed, which inserts the missing quote keys.
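Conceptually, the conversion turns each whitespace-separated market data line (date, quote key, value) into an INSERT statement. The following Python sketch illustrates the idea only; the actual work is done by convertMarketdata2SQL.pl, and the table and column names below are hypothetical (the real DDL is in MdatTables.sql).

```python
# A conceptual sketch of the market data conversion: turn whitespace-separated
# lines (date, quote key, value) into SQL INSERT statements. Table and column
# names are hypothetical; the actual conversion is convertMarketdata2SQL.pl.

def marketdata_to_dml(lines):
    statements = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip comments and blank lines
        date, quote_key, value = line.split()
        statements.append(
            "INSERT INTO MarketData (AsOfDate, QuoteKey, QuoteValue) "
            f"VALUES ('{date}', '{quote_key}', {value});"
        )
    return statements

sample = [
    "# market data as of 2016-02-05",
    "20160205 FX/RATE/EUR/USD 1.1234",
]
print("\n".join(marketdata_to_dml(sample)))
```

The generated statements are what end up in the DML scripts in the Data folder.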
Database creation and filling
The DDL and DML scripts are executed with runOREDBScripts.cmd, which runs each script using execMSSQL.cmd, execMYSQL.cmd or any provided execYOURDB.cmd acting as a wrapper for different databases. runOREDBScripts.cmd takes two optional arguments: the first is the mentioned exec cmd script, containing the invocation of the database-specific command on the passed input script (%1); the second is the SQL script creating the ORE database, which also might be very specific to the local situation. By default, runOREDBScripts.cmd starts with execMSSQL.cmd and oreDBCreateMSSQL.sql. Currently only MS SQL Server and MySQL have been tested; to easily start runOREDBScripts.cmd for MySQL, there is a script runOREDBScriptsMYSQL.cmd that calls runOREDBScripts.cmd with execMYSQL.cmd. The output of runOREDBScripts.cmd is logged to SQL.log.
Database extraction and running ORE
As a proof of concept, the script runXMLOutput.cmd fetches the database-stored data for Example 2 (plus the sensitivity analysis and stress test from Example 15) and writes it into the respective XML files in the folder OREDB. Subsequently, ORE is started with the ore.xml parametrization and the extracted files.
The final OREControl suite would pass the XML strings in memory to a modified ORE app (using SWIG) and retrieve the results in memory, too.