Aug 25, 2022
Data Culture

Decoding the Modern Data Stack — Data Integration

Post by
Benedetta Cittadin

The modern data stack is a collection of cloud-native applications that serve as the foundation for an enterprise data infrastructure, becoming the de facto standard for organizations to deal with their data. Like an industrial value chain, the modern data stack comprises different components, all related to each other. In a previous blog, I went through a high-level overview of the modern data stack, but each component deserves detailed attention. Let’s start from the beginning: data integration. 

In this blog, I’ll sum up the main aspects of data integration by answering the following questions:

  • What is data integration?
  • What are examples of data integration?
  • What are the benefits of integrating data into a unified destination?
  • How do you integrate data? ETL vs. ELT
  • What are the different storage options for data integration?
  • Why data observability is essential from the data integration stage
  • How are data integration and data observability linked together?

What is data integration?

As the quantity of data generated and collected by businesses continues to grow exponentially, companies need to integrate this data to make sense of it. Effective data integration means extracting data from different sources - like cloud-based applications, production databases, real-time data feeds, and more - and creating a central, unified view of your information. 

Data integration techniques: ETL vs. ELT

I have already discussed the differences between these two techniques in this blog. But let’s go through them once again. ETL (extract, transform, load) and ELT (extract, load, transform) are the two most popular data integration techniques used to transfer data into a unified destination, like a database, a data warehouse, a data lake, or a data lakehouse.

As mentioned in my previous blog, the ETL method used to be preferred because of the high cost of on-premise storage solutions. Loading untransformed data into these warehouses was too costly, leading organizations to apply the desired transformations as soon as the data was extracted. With the rise of cloud computing, however, storage became much cheaper, which removed the need to transform data before loading it into the warehouse.

Let’s go through the two different techniques in more detail.

Data integration with ETL

ETL is a data integration technique in which data is extracted from a source system, transformed, and loaded into a unified destination. Before the rise of cloud data warehouses, ETL was the preferred method, but it has largely been replaced by ELT because of several disadvantages, including the following (a short code sketch follows the list):

  • As only transformed data is stored in the destination, organizations need to know how they are going to use the data beforehand to make the appropriate modifications. Modifying these requirements can be very time-consuming and costly.
  • Volume and demand for data have grown dramatically in the past few years, and many ETL processes fail to scale to these heavy loads. When dealing with large amounts of data, common pitfalls include loading irrelevant data and bottlenecks caused by insufficient CPU or by processing data serially instead of in parallel.
  • In ETL, pipelines extract and transform data all at once, so they break as soon as the schema changes in the data source, for instance when a field is removed. This breaks the transformation code, which data engineers then have to rewrite.
  • Data security can become a concern when loading large amounts of raw data into your storage. Managing user access to the raw data stored in the data warehouse is crucial to dealing with this challenge.
  • Since data is stored with minimal processing, additional steps may be needed to ensure that you comply with data protection regulations such as GDPR and HIPAA.
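
To make the flow concrete, here is a minimal, hypothetical ETL sketch in Python. It uses a local CSV file and SQLite as stand-ins for a real source system and warehouse, and the file path and column names are invented; the point is simply that the transformation happens before the load, so only transformed rows ever reach the destination.

```python
# Minimal, illustrative ETL sketch (not a production pipeline).
# "orders.csv", the column names, and the SQLite destination are
# hypothetical stand-ins for a real source system and warehouse.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Pull raw rows from a source file (stand-in for an API or database)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Apply business rules *before* loading: filter, cast, rename."""
    out = []
    for row in rows:
        if not row.get("email"):  # drop incomplete records
            continue
        # If the source drops or renames "amount", this line breaks,
        # which is exactly the brittleness described in the list above.
        out.append((row["email"].lower(), float(row["amount"])))
    return out

def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Write only the transformed rows; the raw data is not kept."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders (email TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```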

Data integration with ELT

ELT is a data integration method in which data is extracted from a source system, loaded into a destination system, and then transformed according to the organization’s needs. The core characteristic of this method is that data remains unmodified throughout the extraction and loading phases, so the original, raw data is always accessible to teams. The key advantage of this approach is that, as needs and uses for data change within the organization, the data can be re-transformed from the ever-accessible raw data in the storage system. Besides sparing organizations the difficulty of knowing the requirements for the data before it is loaded into the warehouse, ELT also gives data analysts more autonomy from data engineers, since engineers no longer need to modify pipelines to re-transform data.
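
As an illustration, here is an equally minimal ELT sketch, again with SQLite and invented file, table, and column names standing in for a cloud warehouse. The raw rows are loaded untouched, and the transformation is a SQL step inside the destination that can be re-run whenever requirements change, without re-extracting anything.

```python
# Minimal, illustrative ELT sketch: raw data is loaded as-is and
# transformed later inside the destination.
import csv
import sqlite3

def extract_and_load(path: str, con: sqlite3.Connection) -> None:
    """Load raw rows untouched so the original data stays available."""
    con.execute("CREATE TABLE IF NOT EXISTS raw_orders (email TEXT, amount TEXT)")
    with open(path, newline="") as f:
        rows = [(r.get("email"), r.get("amount")) for r in csv.DictReader(f)]
    con.executemany("INSERT INTO raw_orders VALUES (?, ?)", rows)

def transform(con: sqlite3.Connection) -> None:
    """Re-runnable, in-warehouse transformation (the 'T' happens last)."""
    con.executescript("""
        DROP TABLE IF EXISTS orders_clean;
        CREATE TABLE orders_clean AS
        SELECT LOWER(email) AS email, CAST(amount AS REAL) AS amount
        FROM raw_orders
        WHERE email IS NOT NULL AND email <> '';
    """)

if __name__ == "__main__":
    con = sqlite3.connect("warehouse.db")
    extract_and_load("orders.csv", con)
    transform(con)  # can be re-run with new rules without re-extracting
    con.commit()
    con.close()
```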

What are the different storage options for data integration?

Today, there are multiple data storage options available. One of the challenges of data integration is to assess what the best choice for your organization is, keeping in mind that each technology is different and meant to address different kinds of data storage and processing requirements. Let’s go through the three most common data storage options in detail.

Data warehouse. A data warehouse is a repository designed for storing and analyzing large amounts of information coming from different sources. Because warehouses are optimized for running analytics across sizeable structured data sets, they are often used for business intelligence and data analytics. This is the best option if your organization doesn’t need to manage unstructured data.

Data lake. A data lake is a system used to store vast amounts of unstructured data whose purpose has not yet been defined. It offers much lower storage costs, but at the expense of efficient, fast analysis. So, if your goal is to store large volumes of unstructured data cheaply before you know how it will be used, data integration into a data lake may be the best option for you.

Data lakehouse. A data lakehouse is a newer storage architecture that combines the data management capabilities of data warehouses with the flexibility and lower cost of data lakes. This is the best option if your organization would benefit from the capabilities of both data warehouses and data lakes.

How to choose the best data integration tool for your organization?

There are several different data integration tools available in the market. Choosing the best one for your organization can be complicated, and it can depend on many different variables. Let’s briefly go through some of the things you need to consider when choosing a data integration tool:

Data connector quality. 

  • Open-source vs. proprietary: choosing between the two can be complicated. Proprietary connectors are software owned and maintained by the vendor, while open-source connectors can be accessed and modified by anyone. In practice, open-source connectors cover a larger range of data sources, but proprietary connectors are usually of higher quality.
  • Standardized schemas and normalization: data coming from source APIs is not standardized. Normalization minimizes duplicate data, simplifies queries, and avoids data modification issues, while standardization makes sure that the data is internally consistent (a small sketch follows this list).
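
As a toy illustration of that normalization step, the sketch below (with invented payloads and field names) maps source-specific field names onto one standard schema and collapses the resulting duplicates:

```python
# Hypothetical raw payloads from two sources describing the same customer,
# with inconsistent field names and date formats.
records = [
    {"Email": "ann@example.com", "signup_date": "2022/08/01"},
    {"email_address": "ann@example.com", "signupDate": "2022-08-01"},
]

# Mapping from source-specific field names to one standard schema.
FIELD_MAP = {"Email": "email", "email_address": "email",
             "signup_date": "signed_up_at", "signupDate": "signed_up_at"}

def normalize(record: dict) -> dict:
    """Rename fields to the standard schema and standardize formats."""
    out = {FIELD_MAP.get(k, k): v for k, v in record.items()}
    out["signed_up_at"] = out["signed_up_at"].replace("/", "-")
    return out

# Once fields are consistent, the two records collapse into one.
normalized = {tuple(sorted(normalize(r).items())) for r in records}
print(normalized)
```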

Support for sources and destinations. It is important to ensure that the data integration tool of your choice supports the sources and destinations that you work with.

Automation. In general, good data tools should remove as much manual intervention as possible. A data integration tool should offer automated schema migration, automatic adjustment to API changes, and the ability to ingest, replicate, and deliver data with speed and efficiency.
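
As a rough sketch of what automated schema migration can look like, the example below (SQLite, with an invented table and columns, purely illustrative) adds any column the source has started sending that the destination table does not yet have, instead of failing the load:

```python
# Hedged sketch of automated schema migration: if the source starts
# sending a column the destination does not have yet, add it.
import sqlite3

def migrate_schema(con: sqlite3.Connection, table: str, source_columns: dict) -> None:
    """Add any source column missing from the destination table."""
    existing = {row[1] for row in con.execute(f"PRAGMA table_info({table})")}
    for name, sql_type in source_columns.items():
        if name not in existing:
            con.execute(f"ALTER TABLE {table} ADD COLUMN {name} {sql_type}")

if __name__ == "__main__":
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customers (email TEXT)")
    # The source now exposes an extra 'country' field:
    migrate_schema(con, "customers", {"email": "TEXT", "country": "TEXT"})
    print([row[1] for row in con.execute("PRAGMA table_info(customers)")])
```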

Security and compliance. As mentioned above, it is essential to understand how the tool of your choice complies with current regulations. 

Use cases. What do you need to get done with your data integration tool? Data integration tools can be used for data creation, data pipelines, virtualization, cleansing, discovery, real-time processing, and more. So, understanding what you need is essential to choosing the right data integration tool.

Ease of use. As always, any tool you include in your data stack should be easy to learn and intuitive. 

Why data observability is essential from the data integration stage

Moving data from source to destination is a crucial part of a data engineer’s job, and it gives companies a set of benefits: a single source of truth in the organization, reduced security and operational risks, the ability to transform data with shared tools and methodologies, and more. However, data quality issues can and often do materialize at this crucial stage. When you pull data from an external system, your team is blind to what that system is doing; you don’t control when it changes its API or schema. As a consumer of these different sources, you need to know when something changes. You need this visibility because data integration is the starting point: if you don’t notice anomalies fast enough, the bad data ends up mixed with the rest of your data. So it’s essential to ensure data reliability throughout the entire data transit process, from the very beginning, through data observability.

To take a preventative approach and detect issues at the source, before the data goes into consumption, you need data lineage. In short, data arrives on one side and comes out on the other, so if you detect something wrong at any stage of the data lifecycle, you want to know where the problem is coming from. This is how you ensure you can react quickly whenever you see a problem.
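
As a simple illustration of catching issues at the source, the hypothetical check below compares the fields an external API returns against the fields the pipeline expects and flags the drift before the batch lands downstream. The field names are invented; a real observability tool would run this kind of check continuously and alert the team rather than raise an exception.

```python
# Minimal schema check at the integration stage: compare the fields an
# external source returns today against the fields we expect.
EXPECTED_FIELDS = {"id", "email", "amount", "created_at"}

def check_schema(payload: list[dict]) -> None:
    """Flag removed or unexpected fields instead of failing silently."""
    seen = set().union(*(record.keys() for record in payload)) if payload else set()
    missing, new = EXPECTED_FIELDS - seen, seen - EXPECTED_FIELDS
    if missing or new:
        # In practice this would notify the team or open an incident;
        # here we simply raise so the bad batch never reaches consumers.
        raise ValueError(f"Schema drift detected: missing={missing}, new={new}")

# Example batch that matches the expected schema, so no alert is raised.
check_schema([{"id": 1, "email": "a@example.com", "amount": 9.9,
               "created_at": "2022-08-25"}])
```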

Let’s go through a practical example. A very common use case in data integration is building a holistic view of your customer. The touchpoints organizations have with their customers are spread across ten or more different SaaS services - product databases, payment databases, payment processors, and more - and you need to centralize everything. But with that many sources, the likelihood of something going wrong is high, and without proper data observability and lineage you will see anomalies without knowing where they are coming from. To solve these issues, data engineers have to spend a lot of time investigating where the problem comes from before they can finally troubleshoot it. This becomes a massive waste of time for data engineers, who end up debugging most of the day rather than working on value-creating activities. Access to good data lineage is therefore vital: it lets you quickly go upstream, check where something went wrong, and recover fast.
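
To illustrate the idea, here is a toy lineage lookup with invented table names: even a simple mapping from each derived table to its upstream sources lets you walk back from an anomaly on the customer view to the raw feeds that might have caused it.

```python
# Toy lineage graph: each derived table maps to the sources that feed it.
# Table names are hypothetical.
LINEAGE = {
    "customer_360": ["crm_contacts", "payments_clean", "product_events"],
    "payments_clean": ["raw_stripe_payments"],
    "product_events": ["raw_app_events"],
}

def upstream_sources(table: str) -> set:
    """Return every source that feeds `table`, directly or indirectly."""
    sources = set()
    for parent in LINEAGE.get(table, []):
        sources.add(parent)
        sources |= upstream_sources(parent)
    return sources

# An anomaly on customer_360 traces back to a handful of raw feeds:
print(upstream_sources("customer_360"))
```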

Conclusion 

Data integration tools empower businesses to reach their full potential by breaking down data silos. Through data integration, companies gain the ability to unite their most crucial information in one place, boosting efficiency and productivity. Data integration is essential for organizations to create a single source of truth, reduce security and operational risk, and maintain consistency in their data. But while integrating data from multiple disparate sources can empower organizations to make winning business decisions, it can also introduce data quality issues. If issues that start at the data integration phase are not resolved quickly, they make all of the organization’s data unreliable. For this reason, a good data observability practice is crucial already at this stage of the modern data stack.

The Modern Data Stack is not complete without an overseeing observability layer to monitor data quality throughout the entire data journey. Do you want to learn more about Sifflet’s Full Data Stack Observability approach? Would you like to see it applied to your specific use case? Book a demo or get in touch for a two-week free trial!
