ORE DB

ORE DB is a relational database representation of the ORE (Open Source Risk Engine) input data: configuration parameters, trade/portfolio data and market data are stored in database tables, from which the XML and text inputs required by ORE can be reconstructed again.

SQL DDL scripts

Tables

The main rationale was to
a) follow the structure set up by the XML schemata, and
b) maintain relational integrity where possible.

The following SQL scripts define the tables:

For more details, see the SchemaSpy-generated documentation.

XMLselectionViews

These are MS SQL Server XML query views that extract the data from the portfolio and netting set tables into usable XML text. The logic of this extraction is detailed in https://rkapl123.github.io/SQLServerXML.html
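As a rough illustration of the idea (not one of the actual view definitions), such a view can use SQL Server's FOR XML PATH clause to reassemble relational rows into nested XML; the table and column names below are hypothetical:

-- Hedged sketch: turn relational trade rows back into portfolio XML.
-- PortfolioTrades, TradeId and TradeType are hypothetical names.
CREATE VIEW XMLPortfolioSketch AS
SELECT (
	SELECT t.TradeId AS '@id',
	       t.TradeType AS 'TradeType'
	FROM PortfolioTrades AS t
	FOR XML PATH('Trade'), ROOT('Portfolio'), TYPE
) AS PortfolioXML;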

SQL DML script generation

The DML scripts are generated into the Data folder by Perl scripts.

ORE configuration parameters

The cmd shell script runConvertXML2SQL.cmd (which calls the Perl script convertXML2SQL.pl) is used to extract the XML from ORE configuration parameters and trade data and convert it into DML. This can be done either for the provided standard examples (as delivered with ORE) or for other folders. The folders to be processed are configured either via the default values in convertXML2SQL.pl:

# set this to your ORE Root folder
my $oreRoot = ($ARGV[0] ? $ARGV[0] : '../../Engine');
# set this to the folder where configuration files are located
my $configDir = ($ARGV[1] ? $ARGV[1] : "$oreRoot/Examples/Input");
# set this to the xsd schema definitions (should be the ones from the ORE Engine)
my $xsdDir = ($ARGV[2] ? $ARGV[2] : "$oreRoot/xsd");
# set this to your analysis input folder (to translate portfolio, ore parameters, netting sets and simulation/stresstest/sensitivity parameters)
my $inputDir = ($ARGV[3] ? $ARGV[3] : "$oreRoot/Examples/Example_1/Input");
# leave empty to process standard examples
$inputDir = "" if !$ARGV[0];

or via the positional command-line arguments ARGV[0] to ARGV[3] shown above (ORE root folder, configuration folder, xsd schema folder and analysis input folder).

The convertXML2SQL.pl script requires the following Perl packages to be installed: XML::LibXML and Scalar::Util.

To produce correctly running SQL scripts (due to relational integrity), it is important that all parts of the parametrization files are ordered following their dependencies. As an example: a yield curve (e.g. JPY3M) that depends on another yield curve (ProjectionCurveLong: JPY6M) needs to be put AFTER the curve it depends on, so JPY3M should come after JPY6M.
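If, for instance, the generated DML enforces this dependency with a self-referencing foreign key, the INSERT order must respect it; a minimal sketch with hypothetical table and column names:

-- Hedged sketch: the referenced projection curve must be inserted first.
-- CfgYieldCurves, CurveId and ProjectionCurveLong are hypothetical names.
INSERT INTO CfgYieldCurves (CurveId, Currency, ProjectionCurveLong)
	VALUES ('JPY6M', 'JPY', NULL);
INSERT INTO CfgYieldCurves (CurveId, Currency, ProjectionCurveLong)
	VALUES ('JPY3M', 'JPY', 'JPY6M'); -- FK to CfgYieldCurves(CurveId)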

Market, covariance and fixing data

The cmd shell script runConvertMarketdata2SQL.cmd (which calls the Perl script convertMarketdata2SQL.pl) is used to extract the text from the ORE market data, covariance data and fixing data files and convert it into DML. This can be done either for the provided standard examples (as delivered with ORE) or for other files. The files to be processed are configured either via the default values in convertMarketdata2SQL.pl:

# set this to your ORE Root folder
my $oreRoot = ($ARGV[0] ? $ARGV[0] : '../../Engine');
# set this to the path/filename of the marketdata/fixingdata/covariance files
my @marketdataFiles = (($ARGV[1] ? $ARGV[1] : "$oreRoot/Examples/Input/market_20160205.txt"),"marketdata_missing.txt");
my $fixingdataFile = ($ARGV[2] ? $ARGV[2] : "$oreRoot/Examples/Input/fixings_20160205.txt");
my $covarianceFile = ($ARGV[3] ? $ARGV[3] : "$oreRoot/Examples/Example_15/Input/covariance.csv");

or via the positional command-line arguments ARGV[0] to ARGV[3] shown above (ORE root folder, market data file(s), fixing data file and covariance file).

Because the relational integrity of the quotes in the curve configuration relies on the Quote keys available in the MarketData Definitions table, an additional market data file, marketdata_missing.txt, is needed, which inserts the missing Quote keys.
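Conceptually, marketdata_missing.txt leads to DML of the following kind; a hedged sketch, since apart from the table name MdatMarketDataDefinitions the column layout is an assumption:

-- Hedged sketch: register a quote key that the curve configuration
-- references but that is absent from the regular market data file.
-- The column name QuoteKey is an assumption.
INSERT INTO MdatMarketDataDefinitions (QuoteKey)
	VALUES ('IR_SWAP/RATE/JPY/2D/6M/25Y');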

Database creation and filling

The DDL and DML scripts are executed with runOREDBScripts.cmd, which runs each script using execMSSQL.cmd, execMYSQL.cmd or any provided execYOURDB.cmd acting as a wrapper for different databases. The script takes two optional arguments: the first is the mentioned exec cmd script, which contains the invocation of the database-specific command for the passed input script (%1); the second is the SQL script creating the ORE database, which may also be very specific to the local situation. Currently only MS SQL Server and MySQL have been tested. By default, runOREDBScripts.cmd starts with execMSSQL.cmd and oreDBCreateMSSQL.sql; for an easy start with MySQL, the script runOREDBScriptsMYSQL.cmd calls runOREDBScripts.cmd with execMYSQL.cmd. The output of runOREDBScripts.cmd is logged to SQL.log.

Database extraction and running ORE

As a proof of concept, the script runXMLOutput.cmd fetches the database-stored data for example 2 (plus the sensitivity analysis and stress test from example 15) and writes it into the respective XML files in the folder OREDB. Subsequently, ORE is started with the ore.xml parametrization and the extracted files.
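In essence, this extraction is a query against the XML selection views described above, with the result written to a file; a hedged sketch (the view and column names follow the hypothetical sketch given earlier):

-- Hedged sketch: fetch the stored portfolio as XML text;
-- runXMLOutput.cmd writes such a result into an XML file in the OREDB folder.
SELECT PortfolioXML FROM XMLPortfolioSketch;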

The final OREControl suite would pass the XML strings in-memory to a modified ORE-App (using SWIG) and retrieve the results in-memory, too.

Python market data fetching script MDataORE.py

To utilize the MDataORE.py script, you need to define the tickers to be fetched in the MdatVendorDefinitions table, configure the vendor connections (LSEG/Bloomberg) and set up the database connection, as described below.

An example might be the following entries: the first is a Bloomberg ticker for EURUSD forwards, with an override defined to get the outright quotation instead of pips; the second is a USD swaption and the third an FX option, both retrieved using Reuters (LSEG).

If both VendorField1 and VendorField2 are defined, the average of the two fields is taken (e.g. BID/ASK); if only one is defined, that field is used.

TickerId  VendorName  VendorTicker         VendorField1  VendorField2  OverridesFieldId        OverridesValue
1         BLOOMBERG   EURUSD3M BGN Curncy  PX_BID        PX_ASK        FWD_CURVE_QUOTE_FORMAT  OUTRIGHT
2         REUTERS     USD10Y10Y3LATM=      BID           NULL          NULL                    NULL
3         REUTERS     EUR1MO=R             PRIMACT_1     NULL          NULL                    NULL
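Loaded into the MdatVendorDefinitions table (see the migration script below), these example entries correspond to the following rows:

INSERT INTO MdatVendorDefinitions
	(TickerId, VendorName, VendorTicker, VendorField1, VendorField2, OverridesFieldId, OverridesValue)
VALUES
	(1, 'BLOOMBERG', 'EURUSD3M BGN Curncy', 'PX_BID', 'PX_ASK', 'FWD_CURVE_QUOTE_FORMAT', 'OUTRIGHT'),
	(2, 'REUTERS', 'USD10Y10Y3LATM=', 'BID', NULL, NULL, NULL),
	(3, 'REUTERS', 'EUR1MO=R', 'PRIMACT_1', NULL, NULL, NULL);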

After defining the tickers/RICs, you have to configure the LSEG connection. For this you need to create a key for the “Side-by-Side Web API” using the LSEG Workspace app “APPKEY”; follow the instructions there and paste the created key into the file refinitiv-data.config.json. For Bloomberg no further configuration is needed; only an open terminal is required (this makes a BLP API “delivery point” available on localhost).

Then configure your database connection. There are two connection strings available, one for the production and one for the test environment. The test environment is selected if the script is executed from a path containing “Test”; otherwise production is assumed and the data is written into both the test and the production environment. The DBconn variable defines the production database, DBconnTest defines the test database.

DBconn = 'mssql+pyodbc://someDBServerName/Marktdaten?driver=SQL+Server'
DBconnTest = 'mssql+pyodbc://someDBServerNameTest/Marktdaten?driver=SQL+Server'

You can also configure this in a separate MDataORE.config file, which is read and executed initially (if it is found).

To migrate from the previous database setup, you can use the following migration script:

USE ORE;

CREATE TABLE MdatVendorDefinitions(
	TickerId int NOT NULL,
	VendorName varchar(10) NOT NULL,
	VendorTicker varchar(100) NULL,
	VendorField1 varchar(50) NULL,
	VendorField2 varchar(50) NULL,
	OverridesFieldId varchar(50) NULL,
	OverridesValue varchar(50) NULL,
 CONSTRAINT PK_MdatVendorDefinitions PRIMARY KEY CLUSTERED 
(
	TickerId ASC
));

ALTER TABLE MdatMarketDataDefinitions DROP COLUMN VendorTicker;
ALTER TABLE MdatMarketDataDefinitions ADD TickerId int;
ALTER TABLE MdatFixingDataDefinitions DROP COLUMN VendorTicker;
ALTER TABLE MdatFixingDataDefinitions ADD TickerId int;

ALTER TABLE MdatMarketDataDefinitions
ADD CONSTRAINT FK_MdatMdatVendorDefinitionsTickerId
FOREIGN KEY (TickerId) REFERENCES MdatVendorDefinitions(TickerId);

ALTER TABLE MdatFixingDataDefinitions
ADD CONSTRAINT FK_MdatFdatVendorDefinitionsTickerId
FOREIGN KEY (TickerId) REFERENCES MdatVendorDefinitions(TickerId);
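Note that the script above drops the old VendorTicker column before the new TickerId column can be populated from it. If the existing values need to be carried over, one possible approach (a hedged sketch, not part of the original script) is to add TickerId first, migrate the data, and drop VendorTicker only afterwards:

-- Hedged sketch: carry the old VendorTicker values over before dropping them.
-- Run after CREATE TABLE MdatVendorDefinitions and ADD TickerId, but before
-- DROP COLUMN VendorTicker (in its own batch, so the new column is visible).
-- VendorName is not known at this point, so a placeholder is used.
INSERT INTO MdatVendorDefinitions (TickerId, VendorName, VendorTicker)
SELECT ROW_NUMBER() OVER (ORDER BY VendorTicker), 'UNKNOWN', VendorTicker
FROM (SELECT DISTINCT VendorTicker FROM MdatMarketDataDefinitions) AS d;

-- Link the market data definitions to the newly created vendor rows.
UPDATE m SET m.TickerId = v.TickerId
FROM MdatMarketDataDefinitions AS m
JOIN MdatVendorDefinitions AS v ON v.VendorTicker = m.VendorTicker;

The same would apply analogously to MdatFixingDataDefinitions.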