Data Modeling Overview
Data Modeling is the process of designing the data objects of an enterprise (or, on a smaller scale, of a specific project) based on the business requirements. The process covers many aspects of the design: relationships between entities, constraints that enforce business rules and/or validation rules, physical storage, security, ease of use, and more.
Data Modeling is a much more involved exercise than putting some tables and columns together to support an application or enterprise. The goal is to design the data objects, their relationships, constraints, and so on, in a way that meets several requirements, such as:
- Functional Requirements
- Performance Requirements
- Data Storage Requirements
- Data Integration/Flow Requirements
- Data Integrity
- Data Security
- Flexibility to change
Typical deliverables include:
- Conceptual Model
- Logical Model
- Physical Model
- Naming Conventions
- Data Dictionary (detailed technical metadata in some cases)
- DDL Scripts
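As a minimal sketch of the last deliverable, here is what a physical-model DDL script for a single entity might look like. The table, columns, and conventions are hypothetical, and SQLite is used only for illustration; a real script would use the types and features of your target platform.

```python
import sqlite3

# Hypothetical DDL for one entity from a physical model.
# Platform-specific types, tablespaces, partitioning, etc.
# would replace these in a real script.
DDL = """
CREATE TABLE customer (
    customer_id   INTEGER PRIMARY KEY,
    customer_name TEXT    NOT NULL,
    email_address TEXT    UNIQUE,
    status_code   TEXT    NOT NULL DEFAULT 'ACTIVE'
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute(
    "INSERT INTO customer (customer_name, email_address) VALUES (?, ?)",
    ("Acme Corp", "ops@acme.example"),
)
row = conn.execute(
    "SELECT customer_name, status_code FROM customer"
).fetchone()
print(row)  # ('Acme Corp', 'ACTIVE')
```

Note that even this tiny script encodes decisions from earlier deliverables: the NOT NULL, UNIQUE, and DEFAULT clauses are business rules carried forward from the logical model.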
A Data Model is one of the critical deliverables in any IT project, as both technical and business audiences depend on it. A well-designed data model also serves as a communication tool between those audiences.
Here are some DOs and DON’Ts from a Data Model delivery process standpoint.
- The model should meet all the requirements. You should understand the requirements thoroughly (remember: it is not just about putting some tables and columns together).
- Use a Traceability Matrix to ensure the Data Model covers all requirements and specifications.
- Use a Traceability Matrix to map Functional Specifications, Conceptual Model, Logical Model, and Physical Model.
- The choice of design pattern and target data model depends on whether you are building the model for an operational/transactional application or an analytical application.
- Review the Conceptual and Logical models with the business audience, along with (at least) a high-level system context diagram. Ensure the top-level models are reviewed before proceeding with the detailed modeling exercise.
- The conceptual model should be technology-independent.
- Always create a Data Dictionary. This helps technical and non-technical users understand the model and the data better. Ensure your data definitions are detailed and include contextual information; Reference and Master Data provide context for the transactional data.
- Have your Data Modeling team follow proper naming conventions. This helps with consistency.
- The Data Model should always adhere to Enterprise Data Architecture principles, guidelines, and standards.
- Remember the tradeoffs (performance, flexibility, scalability, maintainability, concurrency, access, integrity, data redundancy, etc.).
- Take the performance and other non-functional requirements into consideration when creating the physical model for a specific database platform.
- Understand the pros and cons of different types of indexes before implementing them.
- Normalization/denormalization decisions should be based on the application requirements, not on theory.
- Organize a big data model based on Subject Areas to help manage the Data Model.
- On a Data Migration/Technology Migration project, understand the legacy data by Data Profiling before building a new (target) data model.
- If required, reverse engineer the model from existing tables.
- Implement common validations and integrity checks at the database level.
- It is highly recommended to include Audit Log columns (e.g., created/updated timestamps and user IDs) in all tables.
- Make full use of the features provided by the Data Modeling tool (Design Layers, Macros, Version Control, Automation, Reverse Engineering, Data Lineage, Model Validation, Metadata Integration, etc.).
- Use checklists when reviewing the Data Model or releasing it for review.
- Remember that consistency is the key (no matter what conventions and standards are used).
- Don’t create and finalize data models based solely on User Interface requirements/mock screens. (Data modeling is not just about creating the data elements to store in the database; we also need to consider business rules, validation rules, data flow, data integration, security requirements, etc.)
- Don’t take a cookie-cutter approach. The data model should be built to meet the functional and nonfunctional requirements of a specific application.
- A word of caution about implementing Domain-specific Reference Models as-is: such models usually have an extensive, idealized list of entities and attributes meant to cover all possible requirements. Always customize these models to your specific functional and non-functional requirements.
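Several of the DOs above (database-level validations, Audit Log columns, consistent naming) can be sketched together in one small example. All table and column names are hypothetical, and SQLite stands in for the target platform.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Common validations pushed down to the database: NOT NULL, CHECK,
# and FOREIGN KEY constraints enforce the rules for every
# application that writes to these tables.
conn.executescript("""
CREATE TABLE order_status (
    status_code TEXT PRIMARY KEY
);
INSERT INTO order_status VALUES ('NEW'), ('SHIPPED'), ('CANCELLED');

CREATE TABLE customer_order (
    order_id     INTEGER PRIMARY KEY,
    status_code  TEXT    NOT NULL REFERENCES order_status(status_code),
    order_total  NUMERIC NOT NULL CHECK (order_total >= 0),
    -- Audit Log columns, repeated on every table:
    created_by   TEXT    NOT NULL,
    created_at   TEXT    NOT NULL DEFAULT (datetime('now')),
    updated_by   TEXT,
    updated_at   TEXT
);
""")

# A valid row is accepted ...
conn.execute(
    "INSERT INTO customer_order (status_code, order_total, created_by) "
    "VALUES ('NEW', 99.50, 'etl_user')"
)

# ... while a row violating the CHECK constraint is rejected
# by the database itself, regardless of which application sent it.
try:
    conn.execute(
        "INSERT INTO customer_order (status_code, order_total, created_by) "
        "VALUES ('NEW', -5, 'etl_user')"
    )
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Putting these checks in the database rather than only in application code means every consumer of the data model gets the same integrity guarantees, which is exactly the point of the "common validations at the database level" guideline above.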
If you have any questions, please engage with us via the comment section below or reach out to us at firstname.lastname@example.org.