The choice of a particular file format depends on the following factors:
- Schema evolution: the ability to add, alter, and rename fields.
- Usage pattern: accessing, say, 5 out of 50 columns versus reading most of the columns.
- Splittability: whether the file can be processed in parallel.
- Read/write/transfer performance versus block compression to save storage space.
File formats that can be used with Hadoop include CSV, JSON, columnar formats, sequence files, Avro, and Parquet.
CSV Files
CSV files are an ideal fit for exchanging data between Hadoop and external systems. It is advisable to avoid header and footer lines when using CSV files, so that every line of a split can be treated as plain data.
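A minimal sketch in Python of headerless CSV exchange; the file name and sample rows are made up for illustration:

```python
import csv

# Hypothetical rows; column order and meaning are agreed out of band,
# since the file carries no header line.
rows = [
    [1, "alice", "2024-01-15"],
    [2, "bob", "2024-02-03"],
]

# Write without a header or footer line.
with open("users.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Read the rows back; every line is data, so splits need no special handling.
with open("users.csv", newline="") as f:
    for record in csv.reader(f):
        print(record)
```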
JSON Files
Every JSON record carries its own field names, so data and schema are stored together, which enables full schema evolution. However, JSON files do not support block-level compression.
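As a sketch, the JSON Lines convention (one record per line) shows how each record carries its own field names; the file name and records below are purely illustrative:

```python
import json

# Hypothetical records; the second one adds a field, which readers can
# pick up directly because field names travel with every record.
records = [
    {"id": 1, "name": "alice", "signup": "2024-01-15"},
    {"id": 2, "name": "bob", "signup": "2024-02-03", "country": "US"},
]

# Write one JSON record per line (JSON Lines).
with open("users.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read the records back; any compression would have to be applied to the
# whole file externally, as there is no block-level compression here.
with open("users.jsonl") as f:
    for line in f:
        print(json.loads(line))
```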
Avro Files
This file format is best suited for long-term storage with a schema. Avro files store metadata together with the data and also let you specify an independent schema for reading the files.
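A sketch using the fastavro library (one of several Avro libraries for Python): records are written with one schema and read back with an independent reader schema; the record name, fields, and file path are illustrative assumptions:

```python
from fastavro import parse_schema, reader, writer

# Writer schema: stored in the Avro file header together with the data blocks.
writer_schema = parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
        {"name": "signup", "type": "string"},
    ],
})

records = [
    {"id": 1, "name": "alice", "signup": "2024-01-15"},
    {"id": 2, "name": "bob", "signup": "2024-02-03"},
]

# Write with block compression ("deflate" codec).
with open("users.avro", "wb") as out:
    writer(out, writer_schema, records, codec="deflate")

# Independent reader schema: drops "signup" and adds "country" with a default.
reader_schema = parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
        {"name": "country", "type": "string", "default": "unknown"},
    ],
})

with open("users.avro", "rb") as inp:
    for rec in reader(inp, reader_schema):
        print(rec)  # e.g. {'id': 1, 'name': 'alice', 'country': 'unknown'}
```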
Parquet Files
A columnar file format that supports block-level compression and is optimized for query performance, since it reads only the selected columns (for example, 10 or fewer columns out of a 50+ column record).
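As an illustration, a small sketch with the pyarrow library: it writes a compressed Parquet file and reads back only two of its columns; the table contents and file name are made up:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical table standing in for a wide (50+ column) dataset.
table = pa.table({
    "id": [1, 2, 3],
    "name": ["alice", "bob", "carol"],
    "signup": ["2024-01-15", "2024-02-03", "2024-03-21"],
})

# Write with block compression (Snappy).
pq.write_table(table, "users.parquet", compression="snappy")

# Column projection: only the requested columns are read from the file.
subset = pq.read_table("users.parquet", columns=["id", "name"])
print(subset.to_pydict())
```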