Without data there is no information, and without information there are no knowledge-based decisions!
But only if the data is processed in such a way that the relevant information becomes directly accessible to end users. With the data management provided by Numerics, companies can fully exploit the potential of raw data for BI. There is quite a long way to go from raw data to a data warehouse that can serve user queries: source data from different, usually very heterogeneous operational systems (i.e., legacy systems) must be transferred into a unified repository using a variety of techniques. Numerics designs this repository to meet specific user requirements, enabling queries and reporting. Data can then be aggregated along different hierarchies (such as time, location, and product categories). The various techniques applied by Numerics convert heterogeneous raw data into a unified, user-friendly data format. Data quality plays a decisive role in this complex process chain; Numerics develops and implements rules for the acquisition, processing, and analysis of data.

The basis for any kind of analysis is a relational database model that stores information in a logical and efficient structure. Modular approaches save time, resources, and costs. In the BI context, a data warehouse joins together all transaction data – specifically structured for querying and reporting and available to all users throughout the enterprise. Specific ‘datamarts’ can be defined within a data warehouse (DW) to address different business questions. How the data is stored depends greatly on the specific needs of the user – for example, as ‘normalized’ relational tables or as multi-dimensionally designed cubes.
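To make the datamart idea concrete, here is a minimal sketch of a star-schema fragment and an aggregation along the product-category hierarchy. The table and column names (`product`, `sales`, `category`) are purely illustrative, and Python's built-in SQLite stands in for a real warehouse database:

```python
import sqlite3

# Minimal star-schema sketch: one fact table (sales) and one dimension (product).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, category TEXT)")
cur.execute("CREATE TABLE sales (product_id INTEGER, sale_date TEXT, amount REAL)")
cur.executemany("INSERT INTO product VALUES (?, ?, ?)",
                [(1, "Aspirin", "OTC"), (2, "Insulin", "Rx")])
cur.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(1, "2024-01-05", 9.5), (1, "2024-01-20", 12.0), (2, "2024-01-11", 80.0)])

# A datamart-style query: revenue aggregated along the product-category hierarchy.
rows = cur.execute("""
    SELECT p.category, SUM(s.amount)
    FROM sales s JOIN product p ON p.id = s.product_id
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
print(rows)  # [('OTC', 21.5), ('Rx', 80.0)]
```

A fully ‘normalized’ design would split the dimension further (e.g., a separate category table); the dimensional cube design instead pre-joins such hierarchies for fast slicing.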

For clinical trials, Numerics uses web-based tools that are available to our customers around the clock on an HTTPS server. The software used meets the requirements of 21 CFR Part 11. Customized input masks can be designed as required. The advantages of EDC (Electronic Data Capture) are that records are already validated during data entry and that ad hoc evaluations of patients are feasible at any time.
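The entry-time checking that EDC enables can be sketched as a simple edit-check routine. The field names and plausibility ranges below are invented for illustration only; a real system would load such rules from the study's data-validation plan:

```python
# Hypothetical edit checks, as an EDC input mask might run them at entry time.
# Field names and ranges are illustrative, not from any real study.
RANGES = {"systolic_bp": (70, 250), "age": (18, 99)}

def check_record(record: dict) -> list:
    """Return a list of query messages; an empty list means the record passes."""
    issues = []
    for field, (lo, hi) in RANGES.items():
        value = record.get(field)
        if value is None:
            issues.append(f"{field}: missing value")
        elif not lo <= value <= hi:
            issues.append(f"{field}: {value} outside plausible range {lo}-{hi}")
    return issues

print(check_record({"systolic_bp": 300, "age": 45}))
# ['systolic_bp: 300 outside plausible range 70-250']
```

Because such checks fire while the record is being captured, implausible values generate queries immediately instead of surfacing weeks later during database cleaning.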

ETL stands for the extraction, transformation, and loading of data. It covers the process of transferring data from the source systems (i.e., legacy systems) into a unified repository. Consistent and transparent data provide an overall view of the entire enterprise. Because the transformed data must be available to users during the day, ETL processes mostly run at night; performance is therefore a key criterion for success.
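The three ETL steps can be sketched in miniature. The two "source systems" below are hypothetical inline strings – a semicolon-delimited CSV export and a JSON feed with German-style dates – and the "warehouse" is just a list; the point is only the shape of extract → transform → load:

```python
import csv
import io
import json

# Two hypothetical heterogeneous sources (stand-ins for legacy-system exports).
csv_source = "id;visited\n1;2024-01-05\n2;2024-01-07\n"
json_source = '[{"patient": 3, "visit_date": "05.01.2024"}]'

def extract_and_transform():
    """Yield records from both sources in one uniform format (ISO dates)."""
    for row in csv.DictReader(io.StringIO(csv_source), delimiter=";"):
        yield {"patient_id": int(row["id"]), "date": row["visited"]}
    for row in json.loads(json_source):
        d, m, y = row["visit_date"].split(".")  # convert DD.MM.YYYY to ISO
        yield {"patient_id": row["patient"], "date": f"{y}-{m}-{d}"}

warehouse = []  # stands in for the target repository

def load(records):
    warehouse.extend(sorted(records, key=lambda r: r["patient_id"]))

load(extract_and_transform())
print(warehouse)
```

In production the same pattern runs as a scheduled nightly batch, which is why throughput of the transform step dominates the design.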

ETL systems provide high standards of quality, stability, and scalability. This includes technologies for data quality, data security, and reliability across the processing chain of the entire IT system.

Data must be condensed, aggregated, and enriched to answer complex questions. For example, to derive the number of “physician visits per month” or the number of “sales per week”, the corresponding records of the operational database have to be processed first. Often, data from heterogeneous databases must be transformed into a uniform format. When repetitive analyses with varying selection parameters have to be run, Numerics designs parameterizable processes using macro programs. Numerics has many years of experience with SAS programming and can support you competently in all of these areas.
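A parameterizable aggregation of this kind – counts per month or per week from the same records – can be sketched as follows. In SAS this would be a macro with a period parameter; the Python version below uses invented sample dates and a `period` argument as the tunable parameter:

```python
from collections import Counter
from datetime import date

# Hypothetical visit records; a real run would read these from the database.
visits = [date(2024, 1, 3), date(2024, 1, 17), date(2024, 2, 2), date(2024, 2, 9)]

def aggregate(records, period="month"):
    """Count records per period; 'period' parameterizes the grouping key."""
    if period == "month":
        key = lambda d: (d.year, d.month)
    elif period == "week":
        key = lambda d: tuple(d.isocalendar()[:2])  # (ISO year, ISO week)
    else:
        raise ValueError(f"unsupported period: {period}")
    return dict(Counter(key(d) for d in records))

print(aggregate(visits, "month"))  # {(2024, 1): 2, (2024, 2): 2}
```

Changing one parameter switches the whole analysis from “visits per month” to “visits per week”, which is exactly what macro-driven, parameterizable processes buy you for repetitive reporting.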