More and more car manufacturers and suppliers require solutions for enterprise-wide, multi-site test data management. Against this background, Peak Solution has investigated different ways to implement global access to test data based on openMDM® 4.
During the investigation, Peak ODS Server was used for data storage in “mixed mode”. This means metadata (= test and measurement descriptions) is stored in an Oracle database, while mass data is stored in binary files (ASAM MDF) on a file share. In this setup, the database holds a pointer to the external storage location.
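The mixed-mode layout can be pictured with the following sketch. It is purely illustrative: the record fields, the `file://` reference scheme, and the helper name are assumptions, not the actual ODS schema.

```python
from dataclasses import dataclass

@dataclass
class MeasurementRecord:
    """Metadata row as stored in the Oracle database (illustrative fields)."""
    measurement_id: int
    test_name: str
    # Pointer to the external mass data file (ASAM MDF) on the file share
    mass_data_ref: str

def resolve_mass_data(record: MeasurementRecord) -> str:
    """Return the share path of the MDF file holding the mass data."""
    prefix = "file://"
    if not record.mass_data_ref.startswith(prefix):
        raise ValueError("unsupported mass data reference")
    return record.mass_data_ref[len(prefix):]

rec = MeasurementRecord(4711, "engine_run_01",
                        "file://share-site-a/measurements/m_4711.mf4")
print(resolve_mass_data(rec))  # share-site-a/measurements/m_4711.mf4
```

The point of the split is that search and navigation only touch the small metadata rows in the database, while the large MDF files are fetched from the share only when a client actually reads channel data.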
The specialists of Peak Solution investigated the following approaches for distributing the different system components:
- Centralized system
In the first approach, both the ODS server and the database run centrally at one location, while the openMDM® 4 clients run decentralized on user computers at each location. The ODS server has access to the file shares of all locations and can read and write local and remote mass data.
In this approach the ODS requests from the openMDM® 4 clients are sent over remote connections between the different locations. Measurements of the response time for some key functions of openMDM® 4 (e.g. import of metadata, loading the metadata of a test step and searching for a test step by one of its attributes) have shown that – depending on the bandwidth of the connection – a remote client connecting to a central system is 3 to 6 times slower than a local client. Remark: one can avoid this performance loss by using a central openMDM® 4 client over a Terminal Server connection.
- Distributed ODS servers
In the next approach, instead of a central ODS server, an ODS server was started at each location. All ODS servers access the same central database.
The ODS requests from the openMDM® 4 clients are now local requests with small latency and large bandwidth, but the database must be accessed over a remote connection. With this setup, the system was still slower by a factor of 2. This can be explained by the fact that one call to the ODS interface often results in multiple SQL calls to the database. Latency adds up across the individual calls, and as more calls have to be issued, latency has a much bigger impact.
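This latency effect can be made concrete with a simple back-of-the-envelope model. The numbers below are made up for illustration (they are not the measured values from the study); the model only shows why moving latency to the SQL side hurts more, since there are several SQL calls per ODS call.

```python
def response_time(ods_calls: int, sql_calls_per_ods: int,
                  ods_latency_ms: float, sql_latency_ms: float) -> float:
    """Total latency cost of a client operation: each ODS call pays its own
    round trip plus one round trip per resulting SQL call."""
    return ods_calls * (ods_latency_ms + sql_calls_per_ods * sql_latency_ms)

# Centralized system: remote ODS connection (50 ms), local SQL (1 ms)
central = response_time(ods_calls=10, sql_calls_per_ods=5,
                        ods_latency_ms=50, sql_latency_ms=1)

# Distributed ODS servers: local ODS connection (1 ms), remote SQL (50 ms)
distributed = response_time(ods_calls=10, sql_calls_per_ods=5,
                            ods_latency_ms=1, sql_latency_ms=50)

print(central, distributed)  # 550.0 2510.0
```

With five SQL calls per ODS call, the same 50 ms WAN latency is paid five times as often when it sits between the ODS server and the database, which matches the observed slowdown of the distributed-server approach.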
- Synchronized system
In the third approach, an Oracle database was operated at each location. Accordingly, both the ODS requests from the openMDM® 4 clients and the database accesses from the ODS servers take place over local connections with small latencies.
In order to have all servers working on the same stock of metadata, it is necessary to synchronize the databases. This means the Oracle databases must propagate changes made at one location to the other locations. From the ODS servers' point of view, this approach is very similar to the second approach with one central database, but without the additional latency. Unfortunately, setting up this data replication is a very complex task with a multitude of configuration options and topology decisions.
As mass data files get large, it is not feasible for the ODS server to read a remote file synchronously upon client request. The better way is to provide a component responsible for transferring files between the different locations. As ODS does not distinguish between local and remote shares, the logic for triggering the file transfer component has to be implemented in openMDM®. The implementation will be described later in another post. After the transfer is complete, a special importer updates the file reference within ODS, and the file becomes locally available to the clients at the requested location. This file transfer component can be combined with all of the above-mentioned approaches.
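The triggering logic could look roughly like the sketch below. All names here (the `ensure_local` helper, the queue structure, the callback) are hypothetical; the actual openMDM® implementation is the subject of the follow-up post mentioned above.

```python
import os
from typing import Optional

def ensure_local(file_ref: str, local_share: str, transfer_queue: list) -> Optional[str]:
    """If the referenced mass data file is already on the local share, return
    its path; otherwise enqueue a transfer request and return None, so the
    client knows the file is not yet available locally."""
    filename = os.path.basename(file_ref)
    local_path = os.path.join(local_share, filename)
    if os.path.exists(local_path):
        return local_path
    # File lives at a remote location: hand it over to the transfer component.
    transfer_queue.append({"source": file_ref, "target": local_path})
    return None

def on_transfer_complete(update_ods_reference, source: str, target: str) -> None:
    """Importer callback: once the file has arrived, rewrite the pointer
    stored in ODS so clients at this location resolve the local copy."""
    update_ods_reference(source, target)
```

The design choice here is that the client never blocks on a synchronous remote read; it either gets a local path immediately or triggers an asynchronous transfer and retries later.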
Altogether, none of the above-mentioned approaches is optimal in terms of performance. Currently, however, approach 1 in combination with a file transfer component seems to be the most appropriate.
Due to its new, distributed architecture, in the future openMDM® 5 will offer better capabilities for the global management of test data.