Delta Lake on Databricks Demo

With Delta Lake on Databricks, you can build a lakehouse architecture that combines the best of data lakes and data warehouses: a simple, open platform for storing and managing all of your data that supports all of your analytics and AI use cases.

In this demo, we cover the main features of Delta Lake, including unified batch and streaming data processing, schema enforcement and evolution, time travel, and support for UPDATE, MERGE, and DELETE operations. We also touch on some of the performance enhancements available with Delta Lake on Databricks. The sketch below illustrates several of these operations.
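To make those operations concrete, here is a minimal PySpark sketch. It is not taken from the demo itself; it assumes the open source delta-spark package is installed and uses a hypothetical table path /tmp/delta/events. On Databricks, the preconfigured `spark` session can be used directly and the session-building lines can be dropped.

```python
# Minimal sketch of Delta Lake's core operations, assuming a local Spark
# session with the delta-spark package installed. The table path
# /tmp/delta/events is hypothetical, chosen for illustration only.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable

builder = (
    SparkSession.builder.appName("delta-features-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()
path = "/tmp/delta/events"

# Create a Delta table (this becomes version 0 of the table).
spark.createDataFrame([(1, "open"), (2, "open")], ["id", "status"]) \
    .write.format("delta").mode("overwrite").save(path)

events = DeltaTable.forPath(spark, path)

# UPDATE and DELETE are first-class, transactional operations.
events.update(condition="id = 1", set={"status": "'closed'"})
events.delete("id = 2")

# MERGE upserts a batch of changes in a single ACID transaction.
updates = spark.createDataFrame([(1, "reopened"), (3, "open")], ["id", "status"])
(events.alias("t")
    .merge(updates.alias("u"), "t.id = u.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Schema evolution: mergeSchema lets this write add a new `severity` column.
spark.createDataFrame([(4, "open", "high")], ["id", "status", "severity"]) \
    .write.format("delta").mode("append") \
    .option("mergeSchema", "true").save(path)

# Time travel: read the table as it was at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```

Without the mergeSchema option, the append with the extra column would fail schema enforcement, which is the point: writes that do not match the table's schema are rejected unless evolution is explicitly requested.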

Dive Deeper Into Delta Lake

Building Reliable Data Lakes at Scale
Get started with Delta Lake tech talks
Download the Delta Lake cheat sheet

Video Transcript

Delta Lake Demo: Introduction

The lakehouse is a simple and open data platform for storing and managing all of your data that supports all of your analytics and AI use cases. Delta Lake provides the open, reliable, performant, and secure foundation for the lakehouse.

It’s an open source data format and transactional data management system, based on Parquet, that makes your data lake reliable by implementing ACID transactions on top of cloud object storage. Delta Lake tables unify batch and streaming data processing right out of the box. And finally, Delta Lake is designed to be 100% compatible with Apache Spark. So it’s easy to convert your existing data pipelines to begin using Delta Lake with minimal changes to your code.
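As a rough illustration of how small that conversion can be, the following sketch (reusing the `spark` session and the hypothetical /tmp/delta/events table from the snippet above) swaps a Parquet write for a Delta write, then reads the same table in both batch and streaming modes:

```python
# Sketch of migrating an existing Parquet pipeline to Delta Lake.
# Reuses `spark` and the hypothetical /tmp/delta/events table from above.
df = spark.createDataFrame([(5, "open", "low")], ["id", "status", "severity"])

# Before (Parquet): df.write.format("parquet").mode("append").save(path)
# After (Delta): the same DataFrame API, with a different format string.
df.write.format("delta").mode("append").save("/tmp/delta/events")

# The same Delta table serves batch and streaming readers at once.
batch_df = spark.read.format("delta").load("/tmp/delta/events")
batch_df.show()

stream = (
    spark.readStream.format("delta").load("/tmp/delta/events")
    .writeStream.format("console")
    .option("checkpointLocation", "/tmp/delta/_checkpoints/events")
    .start()
)
```

Because the batch and streaming readers share one transactional table, downstream consumers see each committed write exactly once, with no separate lambda-style pipelines to reconcile.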


Ready to get started?

Try Databricks for free