
How to design and set up a Lakehouse architecture using Azure Synapse/Databricks

The lakehouse architecture is quickly becoming the new industry standard for data, analytics and AI. It addresses the most important challenges facing the established data architectures, the data warehouse and the data lake.

In this training 

we'll guide you through the most important concepts of the lakehouse architecture and explore Delta Lake and Apache Spark (a minimal example is sketched after this section).

After this training 

you will have the insights needed to design and set up a Lakehouse architecture using Azure Synapse or Databricks.

This training is for 

data architects, engineers and developers.
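To give a flavour of the hands-on part, below is a minimal, illustrative sketch of writing and reading a Delta Lake table with Apache Spark (PySpark). The paths, names and session configuration are assumptions made for this example, not course material; on Databricks or a Synapse Spark pool, Delta Lake is preinstalled and a configured Spark session is already available.

from pyspark.sql import SparkSession

# Start a Spark session with the Delta Lake extensions enabled
# (requires the delta-spark package when running outside Databricks/Synapse).
spark = (
    SparkSession.builder
    .appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Write a small DataFrame as a Delta table: Parquet data files plus a transaction
# log that adds ACID transactions and time travel on top of the data lake.
df = spark.createDataFrame(
    [(1, "bronze"), (2, "silver"), (3, "gold")],
    ["id", "layer"],
)
df.write.format("delta").mode("overwrite").save("/tmp/lakehouse/layers")

# Read the table back like any other Spark source.
spark.read.format("delta").load("/tmp/lakehouse/layers").show()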

Related blogs

Taking responsibility: Governing AI to generate value while managing risks

AI applications have taken off across industries. From chatbots generating creative content to AI-driven automation in finance and customer service, businesses are seeing real impact. But as AI adoption grows, so do the challenges—bias in decision-making, transparency concerns, and even security risks like deepfake fraud. How can organizations ensure AI delivers value while staying responsible?

When Data Governance Meets Data Engineering: Optimizing Microsoft Purview with SparkLin for Automated Lineage

Data lineage is more than a technical feature; it is a cornerstone for understanding how data flows, transforms and integrates across systems.

Hive vs Iceberg Tables in AWS Athena: Choosing the Best Option for Your Data Pipelines with dbt

How to build a cost-effective and robust streaming data pipeline

Envision a situation where you're tasked with managing clickstream data received via Snowplow. In this blog post, we'll guide you through our solution, step by step.
