This is a major release from 0.11.0 and includes several new features and enhancements along with a large number of bug fixes.
Highlights include a consistent I/O API naming scheme, routines to read HTML, write MultiIndexes to CSV files, read & write Stata data files, read & write JSON format files, Python 3 support for HDFStore, filtering of groupby expressions via filter, and a revamped replace routine that accepts regular expressions.
The I/O API is now much more consistent with a set of top level reader functions accessed like pd.read_csv() that generally return a pandas object.
read_csv
read_excel
read_hdf
read_sql
read_json
read_html
read_stata
read_clipboard
The corresponding writer functions are object methods that are accessed like df.to_csv()
to_csv
to_excel
to_hdf
to_sql
to_json
to_html
to_stata
to_clipboard
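For example, the reader/writer pairs round-trip cleanly (a minimal sketch; the file name is hypothetical):

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
df.to_csv('out.csv', index=False)   # writer: a DataFrame method
df2 = pd.read_csv('out.csv')        # reader: a top-level function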
Fix modulo and integer division on Series/DataFrame to act similarly to float dtypes and return np.nan or np.inf as appropriate (GH3590). This corrects a numpy bug that treats integer and float dtypes differently.
In [1]: p = pd.DataFrame({'first': [4, 5, 8], 'second': [0, 0, 3]})

In [2]: p % 0
Out[2]:
   first  second
0    NaN     NaN
1    NaN     NaN
2    NaN     NaN

In [3]: p % p
Out[3]:
   first  second
0    0.0     NaN
1    0.0     NaN
2    0.0     0.0

In [4]: p / p
Out[4]:
   first  second
0    1.0     NaN
1    1.0     NaN
2    1.0     1.0

In [5]: p / 0
Out[5]:
   first  second
0    inf     NaN
1    inf     NaN
2    inf     inf
Add squeeze keyword to groupby to allow reduction from DataFrame -> Series if groups are unique. This is a regression from 0.10.1; we are reverting to the prior behavior, meaning groupby will return the same shaped objects whether the groups are unique or not. This reverts (GH2893) via (GH3596).
In [6]: df2 = pd.DataFrame([{"val1": 1, "val2": 20},
   ...:                     {"val1": 1, "val2": 19},
   ...:                     {"val1": 1, "val2": 27},
   ...:                     {"val1": 1, "val2": 12}])
   ...:

In [7]: def func(dataf):
   ...:     return dataf["val2"] - dataf["val2"].mean()
   ...:

# squeezing the result frame to a series (because we have unique groups)
In [8]: df2.groupby("val1", squeeze=True).apply(func)
Out[8]:
0    0.5
1   -0.5
2    7.5
3   -7.5
Name: 1, dtype: float64

# no squeezing (the default, and behavior in 0.10.1)
In [9]: df2.groupby("val1").apply(func)
Out[9]:
val2    0    1    2    3
val1
1     0.5 -0.5  7.5 -7.5
Raise on iloc when boolean indexing with a label-based indexer mask: a boolean Series, even one with integer labels, will raise. Since iloc is purely position based, the labels on the Series are not alignable (GH3631). This case is rarely used, and there are plenty of alternatives. This keeps the iloc API purely position based.
In [10]: df = pd.DataFrame(range(5), index=list('ABCDE'), columns=['a'])

In [11]: mask = (df.a % 2 == 0)

In [12]: mask
Out[12]:
A     True
B    False
C     True
D    False
E     True
Name: a, dtype: bool

# this is what you should use
In [13]: df.loc[mask]
Out[13]:
   a
A  0
C  2
E  4

# this will work as well
In [14]: df.iloc[mask.values]
Out[14]:
   a
A  0
C  2
E  4
df.iloc[mask] will raise a ValueError.
The raise_on_error argument to plotting functions is removed. Instead, plotting functions raise a TypeError when the dtype of the object is object, to remind you to avoid object arrays whenever possible; cast to an appropriate numeric dtype if you need to plot something.
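A minimal sketch of the intended workaround, assuming matplotlib is installed and the object values are parseable as numbers:

import pandas as pd

s = pd.Series(['1.0', '2.5', '3.1'], dtype=object)
s.astype(float).plot()   # cast object dtype to numeric first; plotting object dtype raises TypeError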
Add colormap keyword to DataFrame plotting methods. Accepts either a matplotlib colormap object (i.e., matplotlib.cm.jet) or a string name of such an object (i.e., 'jet'). The colormap is sampled to select the color for each column. Please see Colormaps for more information. (GH3860)
DataFrame.interpolate() is now deprecated. Please use DataFrame.fillna() and DataFrame.replace() instead. (GH3582, GH3675, GH3676)
The method and axis arguments of DataFrame.replace() are deprecated.
DataFrame.replace's infer_types parameter is removed and now performs conversion by default. (GH3907)
Add the keyword allow_duplicates to DataFrame.insert to allow a duplicate column to be inserted if True; the default is False (same as prior to 0.12) (GH3679)
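For illustration, a minimal sketch:

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
df.insert(1, 'a', df['a'] * 10, allow_duplicates=True)   # a second column named 'a'
# with the default allow_duplicates=False, inserting an existing column name raises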
Implement __nonzero__ for NDFrame objects (GH3691, GH3696)
IO API
Added top-level function read_excel to replace the following; the original API is deprecated and will be removed in a future version:
from pandas.io.parsers import ExcelFile

xls = ExcelFile('path_to_file.xls')
xls.parse('Sheet1', index_col=None, na_values=['NA'])
With
import pandas as pd

pd.read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
Added top-level function read_sql that is equivalent to the following:
from pandas.io.sql import read_frame

read_frame(...)
DataFrame.to_html and DataFrame.to_latex now accept a path for their first argument (GH3702)
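For example (file names are hypothetical):

import pandas as pd

df = pd.DataFrame({'a': [1, 2]})
df.to_html('table.html')    # write directly to a file path
df.to_latex('table.tex')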
Do not allow astypes on datetime64[ns] except to object, and timedelta64[ns] to object/int (GH3425)
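A sketch of what is still permitted (a disallowed cast is shown commented out):

import pandas as pd

s = pd.Series(pd.date_range('2013-01-01', periods=3))
s.astype(object)        # datetime64[ns] -> object is allowed
# s.astype('float64')   # other casts on datetime64[ns] now raise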
The behavior of datetime64 dtypes has changed with respect to certain so-called reduction operations (GH3726). The following operations now raise a TypeError when performed on a Series and return an empty Series when performed on a DataFrame similar to performing these operations on, for example, a DataFrame of slice objects:
sum, prod, mean, std, var, skew, kurt, corr, and cov
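A minimal sketch of the new Series behavior:

import pandas as pd

s = pd.Series(pd.date_range('2013-01-01', periods=3))
try:
    s.sum()
except TypeError:
    pass   # reductions on a datetime64 Series now raise TypeError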
read_html now defaults to None when reading, and falls back on bs4 + html5lib when lxml fails to parse. A list of parsers to try until success is also valid.
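For example, passing an explicit parser list (the URL is hypothetical and assumes the parser libraries are installed):

import pandas as pd

# try lxml first, then fall back to the bs4 + html5lib backend
tables = pd.read_html('http://example.com/tables.html', flavor=['lxml', 'bs4'])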
The internal pandas class hierarchy has changed (slightly). The previous PandasObject is now called PandasContainer, and a new PandasObject has become the base class for PandasContainer as well as Index, Categorical, GroupBy, SparseList, and SparseArray (+ their base classes). Currently, PandasObject provides string methods (from StringMixin). (GH4090, GH4092)
New StringMixin that, given a __unicode__ method, gets Python 2 and Python 3 compatible string methods (__str__, __bytes__, and __repr__), plus string safety throughout. Now employed in many places throughout the pandas library. (GH4090, GH4092)
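A minimal sketch of the pattern (not pandas' actual implementation; Python 3 semantics shown): define __unicode__ once and derive the other string methods from it:

class StringMixinSketch(object):
    # subclasses define __unicode__; the mixin derives the rest
    def __str__(self):
        return self.__unicode__()
    def __bytes__(self):
        return self.__unicode__().encode('utf-8')
    def __repr__(self):
        return str(self)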
pd.read_html() can now parse HTML strings, files or URLs and return DataFrames, courtesy of @cpcloud (GH3477, GH3605, GH3606, GH3616). It works with a single parser backend: BeautifulSoup4 + html5lib. See the docs.
You can use pd.read_html() to read the output from DataFrame.to_html() like so:
In [15]: df = pd.DataFrame({'a': range(3), 'b': list('abc')})

In [16]: print(df)
   a  b
0  0  a
1  1  b
2  2  c

In [17]: html = df.to_html()

In [18]: alist = pd.read_html(html, index_col=0)

In [19]: print(df == alist[0])
      a     b
0  True  True
1  True  True
2  True  True
Note that alist here is a Python list so pd.read_html() and DataFrame.to_html() are not inverses.
pd.read_html() no longer performs hard conversion of date strings (GH3656).
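If you relied on the old conversion, opt in explicitly via parse_dates; a sketch (the URL and the 'Date' column name are hypothetical):

import pandas as pd

tables = pd.read_html('http://example.com/tables.html', parse_dates=['Date'])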
Warning: You may have to install an older version of BeautifulSoup4; see the installation docs.
Added module for reading and writing Stata files: pandas.io.stata (GH1512), accessible via the read_stata top-level function for reading and the to_stata DataFrame method for writing. See the docs.
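A minimal round-trip sketch (the file name is hypothetical):

import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0]})
df.to_stata('data.dta')            # DataFrame method for writing
df2 = pd.read_stata('data.dta')    # top-level function for reading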
Added module for reading and writing JSON format files: pandas.io.json, accessible via the read_json top-level function for reading and the to_json DataFrame method for writing. See the docs. Addresses various issues (GH1226, GH3804, GH3876, GH3867, GH1305).
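A minimal round-trip sketch:

import pandas as pd

df = pd.DataFrame({'a': [1, 2]})
json_str = df.to_json()            # DataFrame method for writing
df2 = pd.read_json(json_str)       # top-level function for reading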
MultiIndex column support for reading and writing CSV format files
The header option in read_csv now accepts a list of the rows from which to read the index.
The option tupleize_cols can now be specified in both to_csv and read_csv to provide compatibility for the pre-0.12 behavior of writing and reading MultiIndex columns via a list of tuples. The default in 0.12 is to write lists of tuples and not interpret lists of tuples as a MultiIndex column.
Note: The default behavior in 0.12 remains unchanged from prior versions, but starting with 0.13, the default to write and read MultiIndex columns will be in the new format. (GH3571, GH1651, GH3141)
If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(..., index=False)), then any names on the columns index will be lost.
In [20]: from pandas._testing import makeCustomDataframe as mkdf

In [21]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)

In [22]: df.to_csv('mi.csv')

In [23]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2

In [24]: pd.read_csv('mi.csv', header=[0, 1, 2, 3], index_col=[0, 1])
Out[24]:
C0              C_l0_g0 C_l0_g1 C_l0_g2
C1              C_l1_g0 C_l1_g1 C_l1_g2
C2              C_l2_g0 C_l2_g1 C_l2_g2
C3              C_l3_g0 C_l3_g1 C_l3_g2
R0      R1
R_l0_g0 R_l1_g0    R0C0    R0C1    R0C2
R_l0_g1 R_l1_g1    R1C0    R1C1    R1C2
R_l0_g2 R_l1_g2    R2C0    R2C1    R2C2
R_l0_g3 R_l1_g3    R3C0    R3C1    R3C2
R_l0_g4 R_l1_g4    R4C0    R4C1    R4C2
Support for HDFStore (via PyTables 3.0.0) on Python 3
Iterator support via read_hdf that automatically opens and closes the store when iteration is finished. This is only for tables.
In [25]: path = 'store_iterator.h5'

In [26]: pd.DataFrame(np.random.randn(10, 2)).to_hdf(path, 'df', table=True)

In [27]: for df in pd.read_hdf(path, 'df', chunksize=3):
   ....:     print(df)
   ....:
          0         1
0  0.713216 -0.778461
1 -0.661062  0.862877
2  0.344342  0.149565
          0         1
3 -0.626968 -0.875772
4 -0.930687 -0.218983
5  0.949965 -0.442354
          0         1
6 -0.402985  1.111358
7 -0.241527 -0.670477
8  0.049355  0.632633
          0         1
9 -1.502767 -1.225492
read_csv will now throw a more informative error message when a file contains no columns, e.g., all newline characters.
DataFrame.replace() now allows regular expressions on contained Series with object dtype. See the examples section in the regular docs, Replacing via String Expression.
For example, you can do
In [25]: df = pd.DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})

In [26]: df.replace(regex=r'\s*\.\s*', value=np.nan)
Out[26]:
     a  b
0    a  1
1    b  2
2  NaN  3
3  NaN  4
to replace all occurrences of the string '.', with zero or more instances of surrounding whitespace, with NaN.
Regular string replacement still works as expected. For example, you can do
In [27]: df.replace('.', np.nan)
Out[27]:
     a  b
0    a  1
1    b  2
2  NaN  3
3  NaN  4
to replace all occurrences of the string '.' with NaN.
pd.melt() now accepts the optional parameters var_name and value_name to specify custom column names of the returned DataFrame.
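For example (the custom names are illustrative):

import pandas as pd

df = pd.DataFrame({'A': ['a', 'b', 'c'], 'B': [1, 3, 5]})
melted = pd.melt(df, id_vars=['A'], var_name='myVarname', value_name='myValname')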
pd.set_option() now allows N option, value pairs (GH3667).
Let’s say that we had an option 'a.b' and another option 'b.c'. We can set them at the same time:
In [31]: pd.get_option('a.b')
Out[31]: 2

In [32]: pd.get_option('b.c')
Out[32]: 3

In [33]: pd.set_option('a.b', 1, 'b.c', 4)

In [34]: pd.get_option('a.b')
Out[34]: 1

In [35]: pd.get_option('b.c')
Out[35]: 4
The filter method for group objects returns a subset of the original object. Suppose we want to take only elements that belong to groups with a group sum greater than 2.
In [28]: sf = pd.Series([1, 1, 2, 3, 3, 3])

In [29]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[29]:
3    3
4    3
5    3
dtype: int64
The argument of filter must be a function that, applied to the group as a whole, returns True or False.
Another useful operation is filtering out elements that belong to groups with only a couple of members.
In [30]: dff = pd.DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})

In [31]: dff.groupby('B').filter(lambda x: len(x) > 2)
Out[31]:
   A  B
2  2  b
3  3  b
4  4  b
5  5  b
Alternatively, instead of dropping the offending groups, we can return a like-indexed object where the groups that do not pass the filter are filled with NaNs.
In [32]: dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)
Out[32]:
     A    B
0  NaN  NaN
1  NaN  NaN
2  2.0    b
3  3.0    b
4  4.0    b
5  5.0    b
6  NaN  NaN
7  NaN  NaN
Series and DataFrame hist methods now take a figsize argument (GH3834)
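For example (assumes matplotlib is installed; the size is in inches):

import numpy as np
import pandas as pd

s = pd.Series(np.random.randn(100))
s.hist(figsize=(8, 4))   # forwarded to the matplotlib figure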
DatetimeIndexes no longer try to convert mixed-integer indexes during join operations (GH3877)
Timestamp.min and Timestamp.max now represent valid Timestamp instances instead of the default datetime.min and datetime.max (respectively), thanks @SleepingPills
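For example:

import pandas as pd

pd.Timestamp.min   # a real Timestamp at the lower bound of the ns-resolution range
pd.Timestamp.max   # a real Timestamp at the upper bound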
read_html now raises when no tables are found and BeautifulSoup==4.2.0 is detected (GH4214)
Added experimental CustomBusinessDay class to support DateOffsets with custom holiday calendars and custom weekmasks. (GH2301)
Note: This uses the numpy.busdaycalendar API introduced in NumPy 1.7 and therefore requires NumPy 1.7.0 or newer.
In [33]: from pandas.tseries.offsets import CustomBusinessDay

In [34]: from datetime import datetime

# As an interesting example, let's look at Egypt where
# a Friday-Saturday weekend is observed.
In [35]: weekmask_egypt = 'Sun Mon Tue Wed Thu'

# They also observe International Workers' Day so let's
# add that for a couple of years
In [36]: holidays = ['2012-05-01', datetime(2013, 5, 1), np.datetime64('2014-05-01')]

In [37]: bday_egypt = CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)

In [38]: dt = datetime(2013, 4, 30)

In [39]: print(dt + 2 * bday_egypt)
2013-05-05 00:00:00

In [40]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)

In [41]: print(pd.Series(dts.weekday, dts).map(pd.Series('Mon Tue Wed Thu Fri Sat Sun'.split())))
2013-04-30    Tue
2013-05-02    Thu
2013-05-05    Sun
2013-05-06    Mon
2013-05-07    Tue
Freq: C, dtype: object
Plotting functions now raise a TypeError before trying to plot anything if the associated objects have a dtype of object (GH1818, GH3572, GH3911, GH3912), but they will try to convert object arrays to numeric arrays if possible so that you can still plot, for example, an object array with floats. This happens before any drawing takes place, which eliminates any spurious plots from showing up.
fillna methods now raise a TypeError if the value parameter is a list or tuple.
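A minimal sketch of the new behavior:

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
try:
    s.fillna([0.0, 0.0, 0.0])   # list/tuple values are ambiguous
except TypeError:
    pass   # now raises; pass a scalar, dict, or Series instead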
Series.str now supports iteration (GH3638). You can iterate over the individual elements of each string in the Series. Each iteration yields a Series with either a single character at each index of the original Series or NaN. For example,
In [42]: strs = 'go', 'bow', 'joe', 'slow'

In [43]: ds = pd.Series(strs)

In [44]: for s in ds.str:
   ....:     print(s)
   ....:
0    g
1    b
2    j
3    s
dtype: object
0    o
1    o
2    o
3    l
dtype: object
0    NaN
1      w
2      e
3      o
dtype: object
0    NaN
1    NaN
2    NaN
3      w
dtype: object

In [45]: s
Out[45]:
0    NaN
1    NaN
2    NaN
3      w
dtype: object

In [46]: s.dropna().values.item() == 'w'
Out[46]: True
The last element yielded by the iterator will be a Series containing the last element of the longest string in the Series with all other elements being NaN. Here since 'slow' is the longest string and there are no other strings with the same length 'w' is the only non-null string in the yielded Series.
HDFStore:
will retain index attributes (freq, tz, name) on recreation (GH3499)
will warn with an AttributeConflictWarning if you are attempting to append an index with a different frequency than the existing one, or attempting to append an index with a different name than the existing one
supports datelike columns with a timezone as data_columns (GH2852)
Non-unique index support clarified (GH3468).
Fix assigning a new index to a duplicate index in a DataFrame, which would fail (GH3468)
Fix construction of a DataFrame with a duplicate index
ref_locs support to allow duplicative indices across dtypes, allows iget support to always find the index (even across dtypes) (GH2194)
applymap on a DataFrame with a non-unique index now works (removed warning) (GH2786), and fix (GH3230)
Fix to_csv to handle non-unique columns (GH3495)
Duplicate indexes with getitem will return items in the correct order (GH3455, GH3457) and handle missing elements like unique indices (GH3561)
Duplicate indexes with an empty DataFrame.from_records will return a correct frame (GH3562)
Concat producing non-unique columns when duplicates are across dtypes is fixed (GH3602)
Allow insert/delete to non-unique columns (GH3679)
Non-unique indexing with a slice via loc and friends fixed (GH3659)
Extend reindex to correctly deal with non-unique indices (GH3679)
DataFrame.itertuples() now works with frames with duplicate column names (GH3873)
Bug in non-unique indexing via iloc (GH4017); added takeable argument to reindex for location-based taking
Allow non-unique indexing in series via .ix/.loc and __getitem__ (GH4246)
Fixed non-unique indexing memory allocation issue with .ix/.loc (GH4280)
DataFrame.from_records did not accept empty recarrays (GH3682)
read_html now correctly skips tests (GH3741)
Fixed a bug where DataFrame.replace with a compiled regular expression in the to_replace argument wasn’t working (GH3907)
Improved network test decorator to catch IOError (and therefore URLError as well). Added with_connectivity_check decorator to allow explicitly checking a website as a proxy for seeing if there is network connectivity. Plus, new optional_args decorator factory for decorators. (GH3910, GH3914)
Fixed testing issue where too many sockets were open, thus leading to a connection reset issue (GH3982, GH3985, GH4028, GH4054)
Fixed failing tests in test_yahoo, test_google where symbols were not retrieved but were being accessed (GH3982, GH3985, GH4028, GH4054)
Series.hist will now take the figure from the current environment if one is not passed
Fixed bug where a 1xN DataFrame would barf on a 1xN mask (GH4071)
Fixed running of tox under Python 3 where the pickle import was getting rewritten in an incompatible way (GH4062, GH4063)
Fixed bug where sharex and sharey were not being passed to grouped_hist (GH4089)
Fixed bug in DataFrame.replace where a nested dict wasn’t being iterated over when regex=False (GH4115)
Fixed bug in the parsing of microseconds when using the format argument in to_datetime (GH4152)
Fixed bug in PandasAutoDateLocator where invert_xaxis incorrectly triggered MilliSecondLocator (GH3990)
Fixed bug in plotting that wasn’t raising on invalid colormap for matplotlib 1.1.1 (GH4215)
Fixed the legend displaying in DataFrame.plot(kind='kde') (GH4216)
Fixed bug where Index slices weren’t carrying the name attribute (GH4226)
Fixed bug in initializing DatetimeIndex with an array of strings in a certain time zone (GH4229)
Fixed bug where html5lib wasn’t being properly skipped (GH4265)
Fixed bug where get_data_famafrench wasn’t using the correct file edges (GH4281)
See the full release notes or issue tracker on GitHub for a complete list.
A total of 50 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.
Andy Hayden
Chang She
Christopher Whelan
Damien Garaud
Dan Allan
Dan Birken
Dieter Vandenbussche
Dražen Lučanin
Gábor Lipták +
Jeff Mellen +
Jeff Tratner +
Jeffrey Tratner +
Jonathan deWerd +
Joris Van den Bossche +
Juraj Niznan +
Karmel Allison
Kelsey Jordahl
Kevin Stone +
Kieran O’Mahony
Kyle Meyer +
Mike Kelly +
PKEuS +
Patrick O’Brien +
Phillip Cloud
Richard Höchenberger +
Skipper Seabold
SleepingPills +
Tobias Brandt
Tom Farnbauer +
TomAugspurger +
Trent Hauck +
Wes McKinney
Wouter Overmeire
Yaroslav Halchenko
conmai +
danielballan +
davidshinn +
dieterv77
duozhang +
ejnens +
gliptak +
jniznan +
jreback
lexual
nipunreddevil +
ogiaquino +
stonebig +
tim smith +
timmie
y-p