Tabular Datasets

Warning

The pyarrow.dataset module is experimental (specifically the classes), and a stable API is not yet guaranteed.

The pyarrow.dataset module provides functionality to efficiently work with tabular, potentially larger than memory and multi-file datasets:

  • A unified interface for different sources: supporting different sources and file formats (Parquet, Feather files) and different file systems (local, cloud).

  • Discovery of sources (crawling directories, handling directory-based partitioned datasets, basic schema normalization, …)

  • Optimized reading with predicate pushdown (filtering rows), projection (selecting columns), and parallel reading or fine-grained management of tasks.

Currently, only Parquet and Feather / Arrow IPC files are supported. The goal is to expand this in the future to other file formats and data sources (e.g. database connections).

For those familiar with the existing pyarrow.parquet.ParquetDataset for reading Parquet datasets: pyarrow.dataset’s goal is similar but not specific to the Parquet format and not tied to Python: the same datasets API is exposed in the R bindings for Arrow. In addition, pyarrow.dataset provides improved performance and new features (e.g. filtering within files rather than only on partition keys).

Reading Datasets

For the examples below, let’s create a small dataset consisting of a directory with two parquet files:

In [1]: import tempfile

In [2]: import pathlib

In [3]: import pyarrow as pa

In [4]: import pyarrow.parquet as pq

In [5]: base = pathlib.Path(tempfile.gettempdir())

In [6]: (base / "parquet_dataset").mkdir(exist_ok=True)

# creating an Arrow Table (numpy is used to generate the random data)
In [7]: import numpy as np; table = pa.table({'a': range(10), 'b': np.random.randn(10), 'c': [1, 2] * 5})

# writing it into two parquet files (Table.slice takes an offset and a length)
In [8]: pq.write_table(table.slice(0, 5), base / "parquet_dataset/data1.parquet")

In [9]: pq.write_table(table.slice(5, 10), base / "parquet_dataset/data2.parquet")

Dataset discovery

A Dataset object can be created with the dataset() function. We can pass it the path to the directory containing the data files:

In [10]: import pyarrow.dataset as ds

In [11]: dataset = ds.dataset(base / "parquet_dataset", format="parquet")

In [12]: dataset
Out[12]: <pyarrow._dataset.FileSystemDataset at 0x7f81281e54f0>

In addition to a base directory path, dataset() accepts a path to a single file or a list of file paths.
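
For example, a minimal sketch passing the two files created above as an explicit list (equivalent here to the directory-based discovery):

dataset = ds.dataset(
    [str(base / "parquet_dataset" / "data1.parquet"),
     str(base / "parquet_dataset" / "data2.parquet")],
    format="parquet",
)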

Creating a Dataset object loads nothing into memory; it only crawls the directory to find all the files:

In [13]: dataset.files
Out[13]: ['/tmp/parquet_dataset/data1.parquet', '/tmp/parquet_dataset/data2.parquet']

… and infers the dataset’s schema (by default from the first file):

In [14]: print(dataset.schema.to_string(show_field_metadata=False))
a: int64
b: double
c: int64
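
If the inferred schema is not what you want, a schema can also be passed explicitly to dataset(). A sketch, assuming the specified types are compatible with the files:

# explicitly provide the schema instead of inferring it from the first file
schema = pa.schema([("a", pa.int64()), ("b", pa.float64()), ("c", pa.int64())])
dataset = ds.dataset(base / "parquet_dataset", format="parquet", schema=schema)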

Using the Dataset.to_table() method we can read the dataset (or a portion of it) into a pyarrow Table (note that depending on the size of your dataset this can require a lot of memory, see below on filtering / iterative loading):

In [15]: dataset.to_table()
Out[15]: 
pyarrow.Table
a: int64
b: double
c: int64

# converting to pandas to see the contents of the scanned table
In [16]: dataset.to_table().to_pandas()
Out[16]: 
   a         b  c
0  0 -0.778317  1
1  1 -0.041500  2
2  2  0.135596  1
3  3  1.115527  2
4  4  1.690259  1
5  5  1.036367  2
6  6 -1.318701  1
7  7 -0.739963  2
8  8  1.197901  1
9  9  0.583405  2

Reading different file formats

The above examples use Parquet files as dataset sources, but the Dataset API provides a consistent interface across multiple file formats and sources. Currently, the Parquet and Feather / Arrow IPC file formats are supported; more formats are planned for the future.

If we save the table as a Feather file instead of Parquet files:

In [17]: import pyarrow.feather as feather

In [18]: feather.write_feather(table, base / "data.feather")

then we can read the Feather file using the same functions, but specifying format="feather":

In [19]: dataset = ds.dataset(base / "data.feather", format="feather")

In [20]: dataset.to_table().to_pandas().head()
Out[20]: 
   a         b  c
0  0 -0.778317  1
1  1 -0.041500  2
2  2  0.135596  1
3  3  1.115527  2
4  4  1.690259  1

Customizing file formats

The format name as a string, like:

ds.dataset(..., format="parquet")

is shorthand for a default-constructed ParquetFileFormat:

ds.dataset(..., format=ds.ParquetFileFormat())

The FileFormat objects can be customized using keywords. For example:

parquet_format = ds.ParquetFileFormat(read_options={'dictionary_columns': ['a']})
ds.dataset(..., format=parquet_format)

This will configure column "a" to be dictionary encoded on scan.
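
As a quick check, a sketch using the small dataset created above; after scanning with this customized format, column "a" should come back with a dictionary type:

dataset = ds.dataset(base / "parquet_dataset", format=parquet_format)
# expected to show a dictionary type for column "a", e.g. dictionary<values=int64, indices=int32>
print(dataset.to_table().schema.field("a").type)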

Filtering data

To avoid reading all data when only needing a subset, the columns and filter keywords can be used.

The columns keyword can be used to only read the specified columns:

In [21]: dataset = ds.dataset(base / "parquet_dataset", format="parquet")

In [22]: dataset.to_table(columns=['a', 'b']).to_pandas()
Out[22]: 
   a         b
0  0 -0.778317
1  1 -0.041500
2  2  0.135596
3  3  1.115527
4  4  1.690259
5  5  1.036367
6  6 -1.318701
7  7 -0.739963
8  8  1.197901
9  9  0.583405

With the filter keyword, rows which do not match the filter predicate will not be included in the returned table. The keyword expects a boolean Expression referencing at least one of the columns:

In [23]: dataset.to_table(filter=ds.field('a') >= 7).to_pandas()
Out[23]: 
   a         b  c
0  7 -0.739963  2
1  8  1.197901  1
2  9  0.583405  2

In [24]: dataset.to_table(filter=ds.field('c') == 2).to_pandas()
Out[24]: 
   a         b  c
0  1 -0.041500  2
1  3  1.115527  2
2  5  1.036367  2
3  7 -0.739963  2
4  9  0.583405  2

The easiest way to construct those Expression objects is by using the field() helper function. Any column - not just partition columns - can be referenced using the field() function (which creates a FieldExpression). Operator overloads are provided to compose filters, including comparisons (equal, greater/less than, etc.), set membership testing, and boolean combinations (and, or, not):

In [25]: ds.field('a') != 3
Out[25]: <pyarrow.dataset.Expression (a != 3)>

In [26]: ds.field('a').isin([1, 2, 3])
Out[26]: 
<pyarrow.dataset.Expression is_in(a, value_set=[
  1,
  2,
  3
], skip_nulls)>

In [27]: (ds.field('a') > ds.field('b')) & (ds.field('b') > 1)
Out[27]: <pyarrow.dataset.Expression ((a > b) and (b > 1))>
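
Such composed expressions can be passed directly as the filter; for example, a sketch combining two conditions on the dataset from above:

dataset.to_table(filter=(ds.field('a') > 5) & (ds.field('c') == 2)).to_pandas()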

Reading partitioned data

Above, a dataset consisting of a flat directory with files was shown. However, a dataset can exploit a nested directory structure defining a partitioned dataset, where the sub-directory names hold information about which subset of the data is stored in that directory.

For example, a dataset partitioned by year and month may look like this on disk:

dataset_name/
  year=2007/
    month=01/
       data0.parquet
       data1.parquet
       ...
    month=02/
       data0.parquet
       data1.parquet
       ...
    month=03/
    ...
  year=2008/
    month=01/
    ...
  ...

The above partitioning scheme uses “/key=value/” directory names, as found in Apache Hive.

Let’s create a small partitioned dataset. The write_to_dataset() function can write such hive-like partitioned datasets.

In [28]: table = pa.table({'a': range(10), 'b': np.random.randn(10), 'c': [1, 2] * 5,
   ....:                   'part': ['a'] * 5 + ['b'] * 5})
   ....: 

In [29]: pq.write_to_dataset(table, str(base / "parquet_dataset_partitioned"),
   ....:                     partition_cols=['part'])
   ....: 

The above created a directory with two subdirectories (“part=a” and “part=b”), and the Parquet files written in those directories no longer include the “part” column.

Reading this dataset with dataset(), we now specify that the dataset uses a hive-like partitioning scheme with the partitioning keyword:

In [30]: dataset = ds.dataset(str(base / "parquet_dataset_partitioned"), format="parquet",
   ....:                      partitioning="hive")
   ....: 

In [31]: dataset.files
Out[31]: 
['/tmp/parquet_dataset_partitioned/part=a/011bca896eba42579ee3b5710878e184.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/068db89f3ccf4b27b58fc405d8597bab.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/0a21bf2c3d874a9d87344a5e46a62cd1.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/0bda208595db44d7a084a24338f66c8e.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/12e8056330784e629e4f85636c935689.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/17307869eaae46608fd7e34ab709d944.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/1c97f333220942978a0d74061f10ad7c.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/40f43a0f1064419fa554eb75ebc6afff.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/4422a56873a948cb8497e63bd33053eb.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/521f717b42a6456dbc01c81f3855c668.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/59733c4d51ff4d699792046c49fe6d71.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/71b6c15fe55a4af193564c187962461d.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/811d47028fdb400c8d996b3919e2fc7a.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/874f3131820e46fbb5f01f3f31f5880b.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/8c642c1be6824b1a9843f58b56885876.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/a0e900cbb5e5413b952d80983d28e03d.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/a2dc21214d4d4b0eaeb9e8005abdf9c1.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/a4855f76f9834bd08db0c96f725db10a.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/bc3407dcba48428ab2fa2a80282a3c08.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/d0e5c8c18dc94273853aae42bff6ca49.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/df3d89168279485e886e672f104f9b95.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/eb6fdb18dc504fedb4d7350757967954.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/ec5684713d734d4694bee3b375704b5d.parquet',
 '/tmp/parquet_dataset_partitioned/part=a/f12c0038f59d42fd9ba9ff77599912e6.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/283c674f0c5f46febc6d252cfa23f89a.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/3671905c4f4b4df49d1b1b083312935c.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/3901921fdd7d4bfc95a942f9a79d65df.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/3d08f7f6a7e94ecbacc90ba221823b82.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/40a5f8ae3b944060b43ce6a0336afec4.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/484eee16223748a39c9c8482e633a225.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/505f0b9791b845b38f3e98b18f1a08ac.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/507aa7edc7f645848ffea3b67702c34d.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/5376e022ff8b44e5baa349e4aee06434.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/550c3bbbd22343718eeb470c32d489c9.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/5b3f2cf646c6483a9c49bcb17da2471b.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/62c9714828514eb896b4adc378555d25.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/630dc9618544424083717ed42551990b.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/79edc96fd64244bab2b745dc4ab764a9.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/8290ece298574a1da5366d263822a6c1.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/92336a2fcafb4de295a473341308d578.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/a494d942eca045b295abbbe787b67b5a.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/b41a7d94e0f94ababa4ed695bc48244c.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/be6c5dfe4d1040b4aca7713a60b7d638.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/c5e247602a854b39a1948ae1ecd2ff39.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/cc491a1c39554a2ebd22c748fbfc7df6.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/d3fe9b6ad7134c74a79ea73810108ad8.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/e5984aff6c044addbe742b6f6e6db80f.parquet',
 '/tmp/parquet_dataset_partitioned/part=b/eadcdd05d41a40e9a3a2d1b71b709afa.parquet']

Although the partition fields are not included in the actual Parquet files, they will be added back to the resulting table when scanning this dataset:

In [32]: dataset.to_table().to_pandas().head(3)
Out[32]: 
   a         b  c part
0  0 -0.021321  1    a
1  1  0.643230  2    a
2  2  0.319992  1    a

We can now filter on the partition keys, which avoids loading files altogether if they do not match the predicate:

In [33]: dataset.to_table(filter=ds.field("part") == "b").to_pandas()
Out[33]: 
     a         b  c part
0    5  0.795325  2    b
1    6 -0.583944  1    b
2    7  0.836431  2    b
3    8 -1.131249  1    b
4    9  2.504313  2    b
..  ..       ... ..  ...
115  5 -1.054258  2    b
116  6 -0.389159  1    b
117  7 -2.074697  2    b
118  8  1.057094  1    b
119  9  0.016708  2    b

[120 rows x 4 columns]

Different partitioning schemes

The above example uses a hive-like directory scheme, such as “/year=2009/month=11/day=15”. We specified this by passing the partitioning="hive" keyword. In this case, the types of the partition keys are inferred from the file paths.

It is also possible to explicitly define the schema of the partition keys using the partitioning() function. For example:

part = ds.partitioning(
    pa.schema([("year", pa.int16()), ("month", pa.int8()), ("day", pa.int32())]),
    flavor="hive"
)
dataset = ds.dataset(..., partitioning=part)

“Directory partitioning” is also supported, where the segments in the file path represent the values of the partition keys without including the name (the field names are implicit in the segment’s index). For example, given field names “year”, “month”, and “day”, one path might be “/2019/11/15”.

Since the names are not included in the file paths, these must be specified when constructing a directory partitioning:

part = ds.partitioning(field_names=["year", "month", "day"])

Directory partitioning also supports providing a full schema rather than inferring types from file paths.
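
A sketch of providing the full schema instead of only the field names (the types are then no longer inferred from the paths):

part = ds.partitioning(
    pa.schema([("year", pa.int16()), ("month", pa.int8()), ("day", pa.int32())])
)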

Reading from cloud storage

In addition to local files, pyarrow also supports reading from cloud storage. Currently, HDFS and Amazon S3-compatible storage are supported.

When passing a file URI, the file system will be inferred. For example, specifying an S3 path:

dataset = ds.dataset("s3://ursa-labs-taxi-data/", partitioning=["year", "month"])

Typically, you will want to customize the connection parameters; in that case, a file system object can be created explicitly and passed to the filesystem keyword:

from pyarrow import fs

s3 = fs.S3FileSystem(region="us-east-2")
dataset = ds.dataset("ursa-labs-taxi-data/", filesystem=s3,
                     partitioning=["year", "month"])

The currently available classes are S3FileSystem and HadoopFileSystem. See the Filesystem Interface docs for more details.
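
A similar sketch for HDFS; the host name and dataset path below are placeholders, not values used elsewhere in this guide:

from pyarrow import fs

# "namenode" and "/path/to/dataset" are hypothetical; use your own cluster details
hdfs = fs.HadoopFileSystem(host="namenode", port=8020)
dataset = ds.dataset("/path/to/dataset", filesystem=hdfs, format="parquet")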

Reading from Minio

In addition to cloud storage, pyarrow also supports reading from a MinIO object storage instance emulating S3 APIs. Paired with toxiproxy, this is useful for testing or benchmarking.

from pyarrow import fs

# By default, MinIO will listen for unencrypted HTTP traffic.
minio = fs.S3FileSystem(scheme="http", endpoint="localhost:9000")
dataset = ds.dataset("ursa-labs-taxi-data/", filesystem=minio,
                     partitioning=["year", "month"])

Working with Parquet Datasets

While the Datasets API provides a unified interface to different file formats, some specific methods exist for Parquet Datasets.

Some processing frameworks such as Dask (optionally) use a _metadata file with partitioned datasets which includes information about the schema and the row group metadata of the full dataset. Using such a file can make creating a Parquet Dataset more efficient, since it does not need to infer the schema and crawl the directories for all Parquet files (this is especially the case for filesystems where accessing files is expensive). The parquet_dataset() function allows creating a Dataset from a partitioned dataset with a _metadata file:

dataset = ds.parquet_dataset("/path/to/dir/_metadata")
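
A hedged sketch of how such a _metadata file could be produced at write time, using the metadata_collector keyword of write_to_dataset() together with write_metadata(); the "parquet_dataset_metadata" directory name is just an example:

# collect the row group metadata of every file as it is written
metadata_collector = []
pq.write_to_dataset(table, str(base / "parquet_dataset_metadata"),
                    metadata_collector=metadata_collector)

# write the _metadata summary file next to the data files
pq.write_metadata(table.schema, str(base / "parquet_dataset_metadata" / "_metadata"),
                  metadata_collector=metadata_collector)

dataset = ds.parquet_dataset(str(base / "parquet_dataset_metadata" / "_metadata"))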

By default, the constructed Dataset object for Parquet datasets maps each fragment to a single Parquet file. If you want fragments mapping to each row group of a Parquet file, you can use the split_by_row_group() method of the fragments:

fragments = list(dataset.get_fragments())
fragments[0].split_by_row_group()

This method returns a list of new Fragments mapping to each row group of the original Fragment (Parquet file). Both get_fragments() and split_by_row_group() accept an optional filter expression to get a filtered list of fragments.
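
A sketch combining both, using the hive-partitioned dataset from the “Reading partitioned data” section above:

# only consider fragments (files) from the "part=b" directory
fragments = list(dataset.get_fragments(filter=ds.field("part") == "b"))

# split the first of those fragments into one fragment per row group,
# keeping only row groups whose statistics can satisfy the predicate
row_group_fragments = fragments[0].split_by_row_group(filter=ds.field("a") > 5)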

Manual specification of the Dataset

The dataset() function allows easy creation of a Dataset viewing a directory, crawling all subdirectories for files and partitioning information. However, sometimes discovery is not required and the dataset’s files and partitions are already known (for example, when this information is stored in metadata). In this case it is possible to create a Dataset explicitly without any automatic discovery or inference.

For the example here, we are going to use a dataset where the file names contain additional partitioning information:

# creating a dummy dataset: directory with two files
In [34]: table = pa.table({'col1': range(3), 'col2': np.random.randn(3)})

In [35]: (base / "parquet_dataset_manual").mkdir(exist_ok=True)

In [36]: pq.write_table(table, base / "parquet_dataset_manual" / "data_2018.parquet")

In [37]: pq.write_table(table, base / "parquet_dataset_manual" / "data_2019.parquet")

To create a Dataset from a list of files, we need to specify the paths, schema, format, filesystem, and partition expressions manually:

In [38]: from pyarrow import fs

In [39]: schema = pa.schema([("year", pa.int64()), ("col1", pa.int64()), ("col2", pa.float64())])

In [40]: dataset = ds.FileSystemDataset.from_paths(
   ....:     ["data_2018.parquet", "data_2019.parquet"], schema=schema, format=ds.ParquetFileFormat(),
   ....:     filesystem=fs.SubTreeFileSystem(str(base / "parquet_dataset_manual"), fs.LocalFileSystem()),
   ....:     partitions=[ds.field('year') == 2018, ds.field('year') == 2019])
   ....: 

Since we specified the “partition expressions” for our files, this information is materialized as columns when reading the data and can be used for filtering:

In [41]: dataset.to_table().to_pandas()
Out[41]: 
   year  col1      col2
0  2018     0 -0.239433
1  2018     1 -2.254802
2  2018     2  0.595891
3  2019     0 -0.239433
4  2019     1 -2.254802
5  2019     2  0.595891

In [42]: dataset.to_table(filter=ds.field('year') == 2019).to_pandas()
Out[42]: 
   year  col1      col2
0  2019     0 -0.239433
1  2019     1 -2.254802
2  2019     2  0.595891

Manual scheduling

The to_table() method loads all selected data into memory at once, resulting in a pyarrow Table. Alternatively, a dataset can also be scanned one RecordBatch at a time in an iterative manner using the scan() method:

for scan_task in dataset.scan(columns=[...], filter=...):
    for record_batch in scan_task.execute():
        # process the record batch
        ...
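
For example, a sketch that counts the rows matching a filter without materializing the full table, using the manually specified dataset from the previous section:

total_rows = 0
for scan_task in dataset.scan(columns=["col1"], filter=ds.field("year") == 2019):
    for record_batch in scan_task.execute():
        # each record batch holds only a chunk of the selected data
        total_rows += record_batch.num_rows
print(total_rows)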