How to design and set up a Lakehouse architecture using Azure Synapse or Databricks

The lakehouse architecture is quickly becoming the new industry standard for data, analytics and AI. It proposes a solution to the most important challenges facing the established data architectures, the data warehouse and the data lake.

In this training, we'll guide you through the most important concepts of the lakehouse architecture and explore Delta Lake and Apache Spark (see the short example below).

After this training, you will have the insights needed to design and set up a lakehouse architecture using Azure Synapse or Databricks.

This training is for data architects, engineers and developers.
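
To give a flavour of the hands-on work, here is a minimal sketch of writing and querying a Delta Lake table with PySpark. It assumes a Spark environment where Delta Lake is already available, such as a Databricks cluster or an Azure Synapse Spark pool; the table path and sample data are purely illustrative.

    # Minimal sketch: write and read a Delta Lake table with PySpark.
    # Assumes Delta Lake is available on the cluster (e.g. Databricks or an
    # Azure Synapse Spark pool); the path and data are illustrative only.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Land raw records as a Delta table (the "bronze" layer of a lakehouse).
    raw = spark.createDataFrame(
        [(1, "sensor-a", 21.5), (2, "sensor-b", 19.8)],
        ["id", "device", "temperature"],
    )
    raw.write.format("delta").mode("overwrite").save("/lake/bronze/readings")

    # Delta tables bring ACID transactions, schema enforcement and time travel
    # to plain files on the data lake; here we read the table back and run a
    # simple Spark aggregation on it.
    readings = spark.read.format("delta").load("/lake/bronze/readings")
    readings.groupBy("device").avg("temperature").show()

Because a Delta table is just Parquet files plus a transaction log on the data lake, the same data can be queried from Synapse, Databricks or any other Delta-compatible engine, which is the core idea behind the lakehouse.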

Related blogs

Predicting Platelet Demand: Transforming Healthcare Logistics with Time Series Forecasting

Blood platelets present a unique healthcare supply chain challenge: they are critically important yet have an extremely short shelf life. By precisely forecasting platelet demand, healthcare providers can better balance patient needs with resource optimization, resulting in reduced waste, significant cost savings, and more reliable care. This case study explores how advanced analytics transforms healthcare logistics while maximizing the impact of every blood donation.

Taking responsibility: Governing AI to generate value while managing risks

AI applications have taken off across industries. From chatbots generating creative content to AI-driven automation in finance and customer service, businesses are seeing real impact. But as AI adoption grows, so do the challenges—bias in decision-making, transparency concerns, and even security risks like deepfake fraud. How can organizations ensure AI delivers value while staying responsible?

When Data Governance Meets Data Engineering: Optimizing Microsoft Purview with SparkLin for Automated Lineage

Data lineage is more than a technical feature; it is a cornerstone for understanding how data flows, transforms and integrates across systems.

Hive vs Iceberg Tables in AWS Athena: Choosing the Best Option for Your Data Pipelines with dbt