PERF: Memory duplication spotted when dropping columns/axis from a dataframe with `inplace=True`

This issue has been tracked since 2022-09-21.

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this issue exists on the latest version of pandas.

  • I have confirmed this issue exists on the main branch of pandas.

Reproducible Example

Pandas Memory duplication

Hello!

First of all, thanks for this amazing work and library!

We spotted major memory duplication when dropping a column from a pandas DataFrame
(we observed the issue under pandas 1.4.3, 1.4.4, and 1.5.0).

Here is a code example to reproduce the problem:

import numpy as np
import pandas as pd

from memory_profiler import profile

@profile
def pouet():
  df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
  df.drop(columns=['A'], inplace=True)

if __name__ == '__main__':
  pouet()

After running the script, you should see output similar to the following:

Filename: /tmp/toto.py

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     6     54.1 MiB     54.1 MiB           1   @profile
     7                                         def pouet():
     8    100.0 MiB     46.0 MiB           1     df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
     9    138.4 MiB     38.4 MiB           1     df.drop(columns=['A'], inplace=True)

As you can see, the drop increases memory usage by roughly 38 MiB (from about 100 MiB to 138 MiB in the run above), which corresponds to the size of the newly allocated dataframe.
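
For reference, here is the rough arithmetic behind those numbers (assuming the default int64 dtype on this platform, i.e. 8 bytes per value):

rows = 1_000_000
print(rows * 6 * 8 / 2**20)  # ~45.8 MiB for the original 6-column frame
print(rows * 5 * 8 / 2**20)  # ~38.1 MiB allocated again for the 5 remaining columns after the drop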

After some digging in the core/internals sources, we noticed that the duplication happens in reindex_indexer in core/internals/managers.py, when it calls _slice_take_blocks_ax0 (around line 739).

Memory duplication also occurs when dropping labels along an axis other than 0.

Forcing the only_slice parameter of _slice_take_blocks_ax0 to True seems to fix the duplication when dropping a column on axis 0.

Here is the change we tried in reindex_indexer:

if axis == 0:
    new_blocks = self._slice_take_blocks_ax0(
        indexer,
        fill_value=fill_value,
        only_slice=True,  # <======= HERE
        use_na_proxy=use_na_proxy,
    )

Running the same script again with this change, you should see output similar to the following:

Filename: /tmp/toto.py

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     6     51.4 MiB     51.4 MiB           1   @profile
     7                                         def pouet():
     8     97.4 MiB     46.0 MiB           1     df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
     9     97.6 MiB      0.2 MiB           1     df.drop(columns=['A'], inplace=True)

The dataframe is still updated correctly, and memory usage stays stable and acceptable.

For small dataframes this is not a big problem, but in our use cases we are dealing with large dataframes (roughly 8 GB to 10-20 GB), where dropping columns can lead to seemingly random OOM (out-of-memory) kills.

Are you aware of this problem, and do you have a fix planned? If not, we are willing to contribute one if you can give us more context and use cases around the drop function, to make sure we don't break features that rely on the same internals.

The documentation should also describe this behavior of drop: the inplace parameter can be misleading, since from the user's point of view the dataframe is updated in place, but the data is actually duplicated and the operation is performed on the copy. Users should be made aware of this memory duplication.
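
For illustration, here is a minimal check of that behavior (a sketch using numpy.shares_memory; it compares a handle to one column's buffer taken before the drop with the buffer after it):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
before = df['B'].to_numpy()  # keep a handle on column B's underlying buffer
df.drop(columns=['A'], inplace=True)
after = df['B'].to_numpy()
# On the pandas versions discussed above (without copy_on_write), this is expected to print
# False: the remaining columns were copied into a freshly allocated block even though
# inplace=True was passed.
print(np.shares_memory(before, after))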

Installed Versions

INSTALLED VERSIONS

commit : 87cfe4e
python : 3.10.6.final.0
python-bits : 64
OS : Darwin
OS-release : 21.6.0
Version : Darwin Kernel Version 21.6.0: Wed Aug 10 14:25:27 PDT 2022; root:xnu-8020.141.5~2/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8

pandas : 1.5.0
numpy : 1.23.3
pytz : 2022.2.1
dateutil : 2.8.2
setuptools : 63.2.0
pip : 22.2.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None

Prior Performance

No response

mroeschke wrote this answer on 2022-09-21

Thanks for the report. The library documentation hasn't done a great job describing the sentiment that inplace is generally discouraged and will be deprecated in the near future: #16529

ArcRiiad wrote this answer on 2022-09-21

@mroeschke Thanks for the reply; indeed, mentioning this in the docs is a great idea.

However, having a parameter or another drop function that does not duplicate the dataframe would also be very useful when dealing with large dataframes. Do you have any plans for this use case?

phofl wrote this answer on 2022-09-21

We are currently working on a CopyOnWrite mechanism that would allow these operations to return shallow copies, i.e. avoiding the duplication.
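
Roughly, a shallow copy here means the returned DataFrame reuses the parent's buffers until either frame is modified. A minimal sketch of how this can be observed (assuming pandas 1.5 with the experimental option enabled; whether the buffers are actually shared depends on which optimisations have already landed):

import numpy as np
import pandas as pd

pd.options.mode.copy_on_write = True  # experimental option in 1.5

df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
df2 = df.drop(columns=['A'])
# If the shallow-copy path applies to drop, df2 reuses df's buffers and this prints True;
# the data is only copied later, if and when one of the two frames gets modified.
print(np.shares_memory(df['B'].to_numpy(), df2['B'].to_numpy()))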

jbrockmendel wrote this answer on 2022-09-21

I tried adding a copy keyword to DataFrame.drop #47993 but couldn't get it passing on the CI (this was before we decided not to add copy keywords in 1.5)

ArcRiiad wrote this answer on 2022-09-22

@phofl Good news! Do you have an ETA, or an open PR where this topic is discussed?

Don't hesitate to ping me; we can test the new mechanism and give you feedback.

ArcRiiad wrote this answer on 2022-09-22

I tried adding a copy keyword to DataFrame.drop #47993 but couldn't get it passing on the CI (this was before we decided not to add copy keywords in 1.5)

Indeed, a copy param should also do the job. From what I understand, you prefer implementing CopyOnWrite, right?

phofl wrote this answer on 2022-09-24

#46958 implemented the initial mechanism. The linked issues and discussions should explain most things. This is already in 1.5, but we are still missing documentation for it. So theoretically you could already try it out, but we are still missing the possible optimisations in other parts of the API.

ArcRiiad wrote this answer on 2022-09-26

@phofl Indeed, running the same test under pandas 1.5.0 with pd.options.mode.copy_on_write set to True seems to solve the duplication issue when dropping a column:

import numpy as np
import pandas as pd

from memory_profiler import profile

pd.options.mode.copy_on_write = True

@profile
def pouet():
  df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
  df = df.drop(columns=['A'])

if __name__ == '__main__':
  pouet()

Output:

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     9     96.4 MiB     96.4 MiB           1   @profile
    10                                         def pouet():
    11    142.5 MiB     46.1 MiB           1     df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
    12    142.8 MiB      0.3 MiB           1     df = df.drop(columns=['A'])

ArcRiiad wrote this answer on 2022-09-26

However, we discovered that some other functions are also affected by memory duplication (infer_objects and where seem to have the same behavior).

The test scenario:

import numpy as np
import pandas as pd

from memory_profiler import profile

pd.options.mode.copy_on_write = True

@profile
def pouet():
  df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
  df = df.drop(columns=['A'])
  df = df.infer_objects()
  df = df.where((pd.notnull(df)), pd.NA)


if __name__ == '__main__':
  pouet()

Output:

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     9     98.3 MiB     98.3 MiB           1   @profile
    10                                         def pouet():
    11    144.4 MiB     46.1 MiB           1     df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
    12    144.7 MiB      0.3 MiB           1     df = df.drop(columns=['A'])
    13    182.8 MiB     38.2 MiB           1     df = df.infer_objects()
    14    197.3 MiB     14.4 MiB           1     df = df.where((pd.notnull(df)), pd.NA)

We managed to avoid the memory duplication in df.infer_objects by setting the copy parameter of the self._mgr.convert call to False.

    def infer_objects(self: NDFrameT) -> NDFrameT:
        return self._constructor(
            self._mgr.convert(datetime=True, numeric=False, timedelta=True, copy=False) #<==== Here
        ).__finalize__(self, method="infer_objects")

Output after the modification:

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     9    106.7 MiB    106.7 MiB           1   @profile
    10                                         def pouet():
    11    152.8 MiB     46.1 MiB           1     df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
    12    153.1 MiB      0.3 MiB           1     df = df.drop(columns=['A'])
    13    153.1 MiB      0.0 MiB           1     df = df.infer_objects()
    14    205.7 MiB     52.6 MiB           1     df = df.where((pd.notnull(df)), pd.NA)
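
A quick complementary check (a sketch using numpy.shares_memory) of whether infer_objects still copies:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0,1000000,size=(1000000, 6)), columns=list('ABCDEF'))
before = df['B'].to_numpy()
df2 = df.infer_objects()
# The int64 columns need no conversion, so with copy=False in the internal convert call
# (or once the CoW optimisation covers infer_objects) this should print True; on an
# unpatched 1.5.0 it prints False because the blocks are copied.
print(np.shares_memory(before, df2['B'].to_numpy()))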

Do you think passing a copy param to infer_objects could be a solution? We can create the pull request for it if needed.

And one last question: are these methods (infer_objects and where) covered by the CoW work? Otherwise it would be a good idea to take them into account. WDYT?

phofl wrote this answer on 2022-09-26

As I said, we don't have all the optimisations yet. You can contribute by adding the copy-on-write optimisations to other parts of the API; we have open issues tracking this.
