This article serves as an introductory overview of Microsoft Fabric and as a resource for a basic evaluation of the tool. It also functions as a hub for additional articles in which individual concepts and topics are covered in more detail. The target audience is anyone considering trying, learning, or implementing Fabric in their organization. Microsoft Fabric is a modern SaaS analytics platform that unifies data processing from acquisition/ingestion through to reporting and analytics in Power BI. The goal of Fabric is to eliminate the complexity and fragmentation of data tools by offering a single platform, unified data storage, and a consistent way of working across roles. The architecture of Fabric is what makes all of this possible.
How Microsoft Fabric Covers the Entire Data Lifecycle – Key Artifacts
Below is a simple diagram illustrating the basic architecture of Fabric and how it covers the various stages of data engineering and reporting. It shows that Fabric can cover the entire data process within a single platform – without needing to copy data between different systems using different tools.
[Data Sources]
↓
[Data Factory] (ETL/ELT - Ingestion & Orchestration)
↓
[OneLake] (Central Data Storage)
↓
[Lakehouse | Data Warehouse] (Data Engineering / SQL Analytics)
↓
[Semantic Model] (Business Logic, DAX)
↓
[Power BI] (Reports & Dashboards)
--------------------------------------------
Governance | Security | Capacity | Lineage
--------------------------------------------
Fabric, of course, also provides and continuously improves other important aspects of data engineering, such as data governance, security (permissions, roles, row-level security, etc.), scalability through various pricing plans, and lineage (dependency tracking). These aspects span all the artifacts mentioned above and form the foundation for secure and stable operations.
Basic Concepts and Information About Microsoft Fabric for Beginners
Microsoft Fabric
Microsoft Fabric is a unified analytics platform that provides, in a single environment, tools (artifacts) for data engineering, data science, data warehousing, real-time analytics, and Power BI reporting. Users across different roles can work with a single copy of the data. On this website, we have a dedicated category for Fabric – Microsoft Fabric articles.
Fabric Pricing and Capacity (F-SKU)
Fabric uses a capacity-based model (F-SKU), where computational power is shared across services. Choosing the appropriate capacity is important for optimal performance and cost management. Detailed information on capacity tiers, their descriptions, cost implications, and pricing in general can be found in the article – Fabric | Fabric Pricing – Models, Tiers, Recommendations.
OneLake in Fabric – Data Storage and Processing Capabilities
OneLake [1] is one of the greatest benefits of Microsoft Fabric and forms the foundational data layer of the platform. It is a unified central data storage, which can be compared to “OneDrive for data”. From an architectural perspective, all persisted data in Fabric is physically stored in OneLake. Above this layer, Fabric services (artifacts) such as Lakehouse, Data Warehouse, or Power BI provide different ways to work with the same data.
OneLake is thus a central data layer, abstracted from the user (we do not see it, but it exists in the background). It does not determine how we work with data but where the data is stored. How we work with data is defined by the artifacts and services of Fabric:
- Lakehouse is a service/artifact based on the Delta Lake format, which allows storing structured and semi-structured data with ACID transaction support, ensuring data quality and integrity. It often serves as the first Bronze layer for data acquisition in a medallion architecture but can also serve as Silver and Gold layers.
- Data Warehouse is an artifact providing a SQL-oriented view of the data and is typically used as a Silver/Gold layer.
- Power BI reports can read data from OneLake and its artifacts directly via Direct Lake mode, without the need to copy the data.
This approach ensures that:
- data is not redundantly stored across tools,
- governance and security are consistent,
- and the entire data ecosystem remains transparent and scalable.
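The medallion flow described above (Bronze → Silver → Gold) can be sketched in plain Python. This is a toy illustration only – in Fabric these layers would typically be Delta tables in a Lakehouse, processed with Spark – and all table and column names here (raw_orders, amount, etc.) are hypothetical:

```python
# Toy sketch of a medallion flow: Bronze (raw) -> Silver (cleaned) -> Gold (aggregated).
# In Fabric, each layer would typically be a Delta table in a Lakehouse; here plain
# Python lists stand in for tables. All table/column names are hypothetical.

raw_orders = [  # Bronze: data as ingested, including duplicates and bad rows
    {"order_id": 1, "country": "CZ", "amount": "100.0"},
    {"order_id": 1, "country": "CZ", "amount": "100.0"},  # duplicate
    {"order_id": 2, "country": "DE", "amount": None},     # missing amount
    {"order_id": 3, "country": "CZ", "amount": "50.5"},
]

def to_silver(rows):
    """Deduplicate on order_id, drop invalid rows, cast amount to float."""
    seen, silver = set(), []
    for r in rows:
        if r["order_id"] in seen or r["amount"] is None:
            continue
        seen.add(r["order_id"])
        silver.append({**r, "amount": float(r["amount"])})
    return silver

def to_gold(rows):
    """Aggregate revenue per country for reporting."""
    gold = {}
    for r in rows:
        gold[r["country"]] = gold.get(r["country"], 0.0) + r["amount"]
    return gold

silver_orders = to_silver(raw_orders)
gold_revenue = to_gold(silver_orders)
print(gold_revenue)  # {'CZ': 150.5}
```

The point of the sketch is the separation of concerns: Bronze keeps data as-is, Silver enforces quality and types, and Gold is shaped for consumption – exactly the roles the Lakehouse and Data Warehouse artifacts play over OneLake.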
Articles on various Fabric artifacts can be found below:
- Fabric | dbt – Creating a Fabric Lakehouse/Data Warehouse and Configuration
- Fabric | dbt – Architecture and Role of dbt in Medallion Architecture
- Fabric – ADLS Gen2 and Parquet – Storage Setup and Bronze Data Format
- Fabric | dbt – Shortcuts Linking ADLS Gen2 with Fabric Lakehouse
- Fabric | dbt – How I Model Dimensional Gold Tables (SCD2) in Data Projects?
- Fabric | dbt – Slowly Changing Dimension (SCD 2) – Snapshots and Check Strategies in dbt with Example
- Fabric | dbt – Configuration of profiles.yml for SPN Authentication to SQL Endpoint
Fabric Data Factory (Ingestion & Orchestration)
Data Factory in Fabric is used for data ingestion, scheduling, and orchestration. It is very similar to Azure Data Factory [2], but while Azure Data Factory is billed separately, Fabric Data Factory is included in the Fabric capacity. It supports both low-code approaches and more complex integration scenarios (Python, Spark, etc.). It is a key functionality for initially loading external data into Fabric artifacts (Lakehouse/Data Warehouse).
Key components include:
- Connectors (Data Sources) – Fabric can connect to many types of data sources, such as:
- relational databases
- various cloud services (Azure and others)
- files (CSV, Parquet)
- streaming and event data
- Pipelines (data flows) – Pipelines define the flow of data and the dependencies between the individual steps of a data process. They are used to automate the ingestion, transformation, and publication of data, and offer a low-code, drag-and-drop approach to building data flows.
- Notebooks – Notebooks are artifacts that contain code (e.g., Python, Spark, SQL). They provide a flexible way to work with data and can be executed independently or within pipelines.
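The orchestration idea behind pipelines – steps that run only after their upstream dependencies complete – can be sketched with Python's standard-library `graphlib`. The step names are hypothetical; a real Fabric pipeline would define these activities (copy, notebook, dataflow) in the Data Factory UI rather than in code:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Toy sketch of pipeline orchestration: each step maps to the set of steps
# it depends on. Step names are hypothetical examples, not Fabric APIs.
pipeline = {
    "ingest_csv": set(),                          # no dependencies
    "ingest_api": set(),
    "build_silver": {"ingest_csv", "ingest_api"}, # waits for both ingests
    "build_gold": {"build_silver"},
    "refresh_semantic_model": {"build_gold"},
}

def run(pipeline):
    """Execute steps in dependency order and return the order used."""
    order = list(TopologicalSorter(pipeline).static_order())
    for step in order:
        print(f"running {step}")  # a real runner would invoke the activity here
    return order

execution_order = run(pipeline)
```

The topological ordering guarantees, for example, that `build_silver` never starts before both ingestion steps have finished – the same contract a pipeline's dependency arrows express visually.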
Articles focused on Data Factory:
- Fabric | Getting Started with Data Factory, Pipelines, and Connectors
- Bulk Table Import in Microsoft Fabric using For Each Container and JSON Config File
- Fabric – Bronze: Data Acquisition into Delta Tables via Pipeline (notebook)
- Fabric – Pipeline and Key Vault for Secure Transfer of Secrets (Risk of SecureString Compromise)
- Fabric – Azure Service Principal (SPN) and RBAC for dbt in Entra ID
- Fabric | dbt – Docker dbt Container and Azure Container Apps (CI/CD)
Semantic Model and Power BI
Power BI is a tool for visualization and reporting. While it is most often used outside Fabric, it gains additional value within it: it can read data directly from OneLake, or from artifacts such as Lakehouse and Data Warehouse, via Direct Lake mode. This eliminates the need to copy data between storage systems, as is typical in traditional data platforms, and ensures data consistency and up-to-date reporting. Users can quickly create interactive dashboards and reports directly over the central data layer. To be fair, this “online/direct” approach introduces additional resource demands and may require a larger Fabric capacity (a higher pricing tier), so it is better suited to clients with larger budgets. Fabric can also be used with Power BI in a traditional, lower-cost way.
The semantic model in Power BI is a layer that defines logic over data – relationships, hierarchies, and calculations. For advanced calculations, the DAX (Data Analysis Expressions) language is used, allowing the creation of complex metrics, aggregations, and time-based analyses, providing flexibility to analysts. A well-designed semantic model, combined with DAX, enables efficient transformation of raw data into meaningful business insights without altering the source data.
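To make the idea of model-defined calculations concrete, here is a pure-Python sketch of what two typical measures compute over a hypothetical fact table – a filtered total and a year-over-year comparison. In Power BI this logic would be written as DAX measures (a SUM plus a time-intelligence calculation), not Python; the point is that the calculation lives in the semantic model, not in the source data:

```python
# Hypothetical fact table; in a real model this would be a table in the
# Lakehouse/Warehouse that the semantic model sits on top of.
fact_sales = [
    {"year": 2023, "amount": 120.0},
    {"year": 2023, "amount": 80.0},
    {"year": 2024, "amount": 150.0},
    {"year": 2024, "amount": 100.0},
]

def total_sales(rows, year):
    """Equivalent of a filtered SUM measure."""
    return sum(r["amount"] for r in rows if r["year"] == year)

def yoy_growth(rows, year):
    """Year-over-year growth - the kind of time-based analysis DAX excels at."""
    current, prior = total_sales(rows, year), total_sales(rows, year - 1)
    return (current - prior) / prior

print(total_sales(fact_sales, 2024))           # 250.0
print(round(yoy_growth(fact_sales, 2024), 3))  # 0.25
```

Both measures derive their results at query time from the unchanged fact rows, which is exactly what "transforming raw data into business insights without altering the source data" means in practice.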
We have a dedicated category for Power BI and related tools on the website – Power BI articles.
Conclusion and the Greatest Strength of Microsoft Fabric
That concludes the introductory overview. Long story short – Microsoft Fabric unifies traditionally separate parts of data architecture into a single platform. This enables easier data management, architectural transparency, scalability, cost optimization (low CAPEX), and the backing of a strong vendor with a large community around Fabric.
References
- Microsoft, OneLake, the OneDrive for data [online]. [cited 2026-01-18]. Available from: https://learn.microsoft.com/en-us/fabric/onelake/onelake-overview
- Microsoft, Azure Data Factory [online]. [cited 2026-01-18]. Available from: https://azure.microsoft.com/en-us/products/data-factory