Big Data File Formats for Data Engineers

We live in a digital era in which zettabytes of data are generated at an ever-increasing pace, which brings the challenge of storing and processing all of it. So, what does that have to do with big data file formats for data engineers?

When we say “data”, we don’t refer only to the data of businesses and organizations. This data is generated from numerous sources, including web applications, cell phones, social media networks, bank transactions, messages, and sensors.

The data also differs in structure: some of it is structured, while the rest is semi-structured or unstructured. Storing it therefore requires a significantly large storage layer capable of handling various data types.

For data engineers, it is crucial to have in-depth knowledge of big data. Whether you are trying to get certified or working on your resume for the next job, knowing about big data is a must-have skill. In this article, we will learn about different big data file formats for data engineers.

What is Big Data?

The simplest definition of big data is that it refers to enormous volumes of data, such as product, operational, and customer data, normally in the range of terabytes to petabytes.

Big data analytics can help optimize operational and key business use cases, mitigate regulatory and compliance risks, and even build net-new revenue streams. When talking about big data, your data sources can include:

  • E-commerce transactions,
  • Credit card and POS transactions,
  • Mobile device engagements,
  • Social media engagements,
  • IoT generated sensor readings.

As for insights, you can use big data for:

  • Creating net-new revenue streams,
  • Optimization of operational and key business use cases,
  • Building new and differentiated consumer experiences,
  • Mitigation of regulatory and compliance risks.

Big Data File Formats

Big data engineers mainly deal with a handful of file formats, and each format comes with its own set of pros and cons. The responsibility of a data engineer is to choose the format that offers the best outcome for a given situation. The main file formats are as follows:

  • CSV
  • JSON
  • Avro
  • Parquet
  • ORC

Now that you are familiar with the file formats, let’s take a more detailed look at each one.

1. CSV (Comma-Separated Values)

You might already be familiar with CSV files, as they are the most popular way of exchanging tabular data between systems using only plain text. CSV is a row-based file format: each line of the file is a row in your data table.

Typically, a CSV file contains a header row, much like a spreadsheet in Excel. The header row holds the column names for the data contained in each column. If a header is not available, the data can only be considered partially structured.

One of the most notable properties of CSV files is that they are splittable only when the data is raw and uncompressed, or when a splittable compression format such as LZO or bzip2 is used.

Here’s an example of CSV file formats. Consider a table as follows:

SrID     Product_Name   Product_Quantity   Price
564321   USB            100                9.95
565432   Charger        84                 14.99

The tabular data will be displayed in CSV format as:

SrID,Product_Name,Product_Quantity,Price
564321,USB,100,9.95
565432,Charger,84,14.99
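
As a minimal sketch (the file name and values are simply taken from the example above), Python's built-in csv module can write and read such a file:

import csv

rows = [
    {"SrID": 564321, "Product_Name": "USB", "Product_Quantity": 100, "Price": 9.95},
    {"SrID": 565432, "Product_Name": "Charger", "Product_Quantity": 84, "Price": 14.99},
]

# Write the table with a header row, then read it back as dictionaries.
with open("products.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["SrID", "Product_Name", "Product_Quantity", "Price"])
    writer.writeheader()
    writer.writerows(rows)

with open("products.csv", newline="") as f:
    for record in csv.DictReader(f):
        print(record)  # every value comes back as a string: plain CSV carries no type information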

The CSV format is most suitable for spreadsheet processing and human reading. You can use it for ad-hoc analysis, small data sets, and proofs of concept (POCs).

2. JSON (JavaScript Object Notation)

The second big data file format is JSON, a partially structured format that represents data as key-value pairs. JSON is often compared with XML, because both can store data in a hierarchical format.

Moreover, both formats are readable, but JSON offers smaller documents in comparison to XML. Therefore, these formats are normally used in network communication. Thanks to the rise of REST-based web services, the use of JSON has increased even further.

Most web languages support JSON natively, which is exactly why so much data is transmitted in this format. You can use it as an exchange format for hot data, for storage in cold data warehouses, and for representing data structures.

There are plenty of streaming packages and modules that support JSON serialization and deserialization. Furthermore, JSON documents can be converted to and stored in performance-optimized formats such as Avro or Parquet.
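
As a quick illustration (the record structure and field names are made up for this example), Python's standard json module handles both directions:

import json

# A nested, hierarchical order record, similar to what a REST service might exchange.
order = {
    "SrID": 564321,
    "product": {"name": "USB", "quantity": 100, "price": 9.95},
    "tags": ["electronics", "accessories"],
}

payload = json.dumps(order)         # serialize to a JSON string for transmission or storage
restored = json.loads(payload)      # deserialize back into Python objects
print(restored["product"]["name"])  # -> USB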

3. Avro

Third on our list is the Avro format, a row-based storage format for Hadoop. Mainly used as a serialization platform, Avro stores its schemas in JSON. This makes the schema human-readable and interpretable by almost any program.

Avro stores the data itself in a binary format, which makes it highly compact and efficient. Essentially, it is a language-neutral data serialization system that can be processed in numerous languages, including C, C++, C#, Java, Python, and Ruby.

This particular big data file format has a key feature: robust support for data schemas that change over time, in other words, schema evolution. You can expect Avro to handle schema changes such as added fields, modified fields, and even missing fields.

Moreover, Avro offers one of the richest sets of data structures. For instance, you can define a record that contains (see the sketch after this list):

  • An enumerated type,
  • An array,
  • A sub-record.
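
Here is a minimal sketch of such a schema, written and read with the third-party fastavro package (the schema, field names, and file name are illustrative assumptions, not taken from this article):

import fastavro

# An Avro record that combines an enumerated type, an array, and a sub-record.
schema = fastavro.parse_schema({
    "type": "record",
    "name": "Product",
    "fields": [
        {"name": "SrID", "type": "long"},
        {"name": "Category", "type": {"type": "enum", "name": "Category", "symbols": ["USB", "CHARGER"]}},
        {"name": "Tags", "type": {"type": "array", "items": "string"}},
        {"name": "Price", "type": {"type": "record", "name": "Price", "fields": [
            {"name": "Amount", "type": "double"},
            {"name": "Currency", "type": "string"},
        ]}},
    ],
})

records = [
    {"SrID": 564321, "Category": "USB", "Tags": ["electronics"], "Price": {"Amount": 9.95, "Currency": "USD"}},
]

with open("products.avro", "wb") as out:
    fastavro.writer(out, schema, records)  # the schema travels with the binary data

with open("products.avro", "rb") as f:
    for rec in fastavro.reader(f):
        print(rec)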

Avro is easily the best candidate for data storage in a data lake landing zone. That is mainly due to three reasons:

  • Schema evolution support,
  • Table schemas from Avro files can be conveniently retrieved by downstream systems,
  • Landing zone data is often read as a whole for supplemental processing by downstream systems.

4. Parquet

Parquet is an open-source file format for Hadoop. It achieves efficient storage and performance by storing data, including nested structures, in a flat columnar format.

In Parquet, the values of each column are stored together across records. Let's take a closer look at this big data file format, using the same table of SrID, Product_Name, Product_Quantity, and Price as in the CSV example.

SrID     Product_Name   Product_Quantity   Price
564321   USB            100                9.95
565432   Charger        84                 14.99

To see what Parquet does differently, compare row-wise storage with the column-oriented storage that Parquet uses:

Row-wise storage

564321, USB, 100, 9.95, 565432, Charger, 84, 14.99

Column-oriented storage

564321, 565432, USB, Charger, 100, 84, 9.95, 14.99

As you can guess, the column-oriented layout is comparatively more efficient. It also improves query performance, since only the columns a query needs have to be read.
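
A minimal sketch of this in practice, assuming pandas with the pyarrow engine installed (file and column names are just the running example):

import pandas as pd

df = pd.DataFrame({
    "SrID": [564321, 565432],
    "Product_Name": ["USB", "Charger"],
    "Product_Quantity": [100, 84],
    "Price": [9.95, 14.99],
})

# Write columnar Parquet, then read back only the columns a query actually needs.
df.to_parquet("products.parquet", index=False)
subset = pd.read_parquet("products.parquet", columns=["Product_Name", "Price"])
print(subset)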

5. ORC (Optimized Row Columnar)

Finally, at the end of the list, we have the ORC file format, which provides an incredibly efficient way of storing data. ORC was specifically designed to overcome the limitations of the file formats that preceded it.

Using this format, you can improve performance overall when Hive writes, reads, and processes your data.

The ORC file format stores a vast collection of rows in a single file and within that collection, the row data is further organized in a columnar format. Here’s a schematic diagram to give you an overview:

[Diagram: ORC file layout, stripes containing index data, row data, and a stripe footer, followed by the file footer and postscript]

An ORC file consists of groups of rows known as stripes, plus a file footer that holds auxiliary information. At the bottom of the diagram there is a postscript, which holds the compression parameters and the size of the compressed footer.

The default stripe size is 250 megabytes; large stripe sizes enable large, efficient reads from HDFS. Within each stripe, the stripe footer contains a directory of stream locations, while the row data is what gets used in table scans.

Finally, the index data contains minimum and maximum values for each column, along with the row positions within each column. Overall, ORC is a comparatively more compression-efficient data storage option.
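
A minimal sketch of writing and reading ORC from Python, assuming pandas 1.5+ with pyarrow installed (the names are again the running example; in practice ORC files are most often written by Hive or Spark):

import pandas as pd

df = pd.DataFrame({
    "SrID": [564321, 565432],
    "Product_Name": ["USB", "Charger"],
    "Product_Quantity": [100, 84],
    "Price": [9.95, 14.99],
})

# Write an ORC file, then read back only a column subset, as a columnar engine would.
df.to_orc("products.orc", index=False)
subset = pd.read_orc("products.orc", columns=["SrID", "Price"])
print(subset)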

Why Do We Need Different File Formats?

HDFS-enabled applications such as Spark and MapReduce face a major challenge: the significant time it takes to find relevant data in one location, and then to write that data out to another.

When it comes to managing large datasets, the issue becomes even more complicated, and the difficulty only grows with evolving schemas and storage constraints.

Processing big data therefore comes with a high cost just to store the data. Add the CPU cost of processing it, plus I/O and network costs, and you are left with a money-swallowing whirlpool. In other words, larger datasets mean more expense.

Hence, big data file formats have evolved within data engineering solutions to overcome these issues, among many other use cases. Choosing the appropriate file format brings a lot more benefits than just cutting down on expenses.

Advantages of Using Appropriate Big Data File Formats

Big data costs vary from technology to technology. For instance, Hadoop achieves fault tolerance by generating replicas of files, and accessing those files requires CPU and other hardware resources that add to the cost. Therefore, more data equals higher costs.

However, you can balance the cons with the pros by using the appropriate big data file format. Here are some of the perks of using the right format:

  • Reduced costs,
  • Faster data writing,
  • Schema evolution support,
  • Advanced compression support,
  • Faster data reading from storage,
  • Parallelism support through splittable files.

Keep in mind that some big data file formats are designed for specific use cases while others are for general use. In addition, some file formats are designed to cater to particular data characteristics.

To sum it up, there are enough options to choose from.

Comparison: Different Big Data File Formats

Compression algorithms offer advantages for specific use cases, and cloud storage systems store your data as files in a variety of file formats.

The way you store data in the data lake plays a crucial role; always consider compression, format, and, most importantly, how the data is partitioned.

Here’s a quick comparison of the three main big data file formats:

Options                 Parquet   ORC      Avro
Schema Evolution        Good      Better   Best
File Compression        Better    Best     Good
Splittability Support   Good      Best     Good
Row or Column           Column    Column   Row
Read or Write           Read      Write    Write

The CSV format offers ease of use and high readability, and it is one of the most common formats for representing data in tabular form. However, CSV lacks many of the capabilities offered by the three formats in the table.

JSON behaves much like CSV in terms of splittability when compressed. As for Parquet and ORC, they are also widely used in the Hadoop ecosystem, mainly for querying data. Parquet remains the default format for Spark, whereas ORC is mainly used in Hive.

Finally, Avro is the format from our table that is most commonly used outside the Hadoop ecosystem, for instance with Kafka.

Takeaway – Big Data File Formats for Data Engineers

There are several options for big data file formats. JSON and CSV may be the easiest to use and read. However, they lack a lot when pitted against Avro, Parquet, and ORC. 

In addition, JSON should be avoided for heavy analytical workloads, as it has not shown promising results in performance tests despite being the de facto communication standard of the internet.

Whenever you have to choose a big data file format as a data engineer, remember to consider a few things. As mentioned above, these include human readability, your data structure, performance, compression, compatibility, and schema evolution.

The final piece of advice: choosing the appropriate big data file format is better than chasing the “best” one. Remember, this topic is also likely to come up in your data engineering interview.
