Pandas Data Cleaning Cheat Sheet



In this cheat sheet, we summarize common and useful functionality from Pandas, NumPy, and Scikit-Learn. To see the most up-to-date full version, visit the online cheat sheet at elitedatascience.com. The Pandas cheat sheet will guide you through some more advanced indexing techniques, DataFrame iteration, handling missing values or duplicate data, grouping and combining data, data functionality, and data visualization. In short, everything you need to complete your data manipulation with Python! Don't miss our other cheat sheets for data science, which cover Matplotlib, SciPy, and NumPy. One quick tip on regular expressions: character sets, denoted by a pair of square brackets, match any single character included within the brackets. For example, the regular expression con[sc]en[sc]us will match any of the spellings consensus, concensus, consencus, and concencus.

Are you looking for examples of using Python for data analysis? This article is for you. We will show you how to accomplish the most common data analysis tasks with Python, from the features of Python itself to using modules like Pandas to a simple machine learning example with TensorFlow. Let’s dive in.

A Note About Python Versions

All examples in this cheat sheet use Python 3. We recommend using the latest stable version of Python, for example, Python 3.8. You can check which version you have installed on your machine by running the following command in the system shell:
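The command itself is missing from this copy of the article; on most systems it is:

```shell
# Print the interpreter version, e.g. "Python 3.8.10"
python --version
```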

Sometimes, a development machine will have Python 2 and Python 3 installed side by side. Having two Python versions available is common on macOS. If that is the case for you, you can use the python3 command to run Python 3 even if Python 2 is the default in your environment:
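For example:

```shell
# Explicitly check the version of the Python 3 interpreter
python3 --version
```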
If you don’t have Python 3 installed yet, visit the Python Downloads page for instructions on installing it.


Launch a Python interpreter by running the python3 command in your shell:
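The command is simply:

```shell
# Start an interactive session; quit it with exit() or Ctrl+D
python3
```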

Libraries and Imports

The easiest way to install Python modules that are needed for data analysis is to use pip. Installing NumPy and Pandas takes only a few seconds:
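The install command did not make it into this copy; with pip it is typically:

```shell
# Install both libraries (use pip3 if pip points at Python 2)
pip install numpy pandas
```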
Once you’ve installed the modules, use the import statement to make the modules available in your program:
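The conventional aliases are np and pd:

```python
# Import the modules under their standard short aliases
import numpy as np
import pandas as pd
```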

Getting Help With Python Data Analysis Functions

If you get stuck, the built-in Python docs are a great place to check for tips and ways to solve the problem. The Python help() function displays the help article for a method or a class:
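help() accepts any object, including functions, classes, and modules. For example:

```python
# Show the documentation for the built-in len function
help(len)
```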
The help function uses the system text pagination program, also known as the pager, to display the documentation. Many systems use less as the default text pager; in case you aren't familiar with its Vi-style shortcuts, here are the basics:

  • j and k navigate up and down line by line.
  • / searches for content in a documentation page.
    • After pressing / type in the search query, press Enter to go to the first occurrence.
    • Press n and N to go forward and back through the search results.
  • Ctrl+d and Ctrl+u scroll half a page down and half a page up, respectively.

Another useful place to check out for help articles is the online documentation for Python data analysis modules like Pandas and NumPy. For example, the Pandas user guides cover all the Pandas functionality with explanations and examples.

Basic language features

A quick tour through the Python basics:
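The example block is missing from this copy of the article; here is a minimal sketch of the basics it alludes to (variables, strings, lists, dicts, and loops):

```python
# Variables and basic types
count = 3
pi = 3.14159
name = "Ada"

# f-strings and common string methods
greeting = f"Hello, {name}!"
print(greeting.upper())        # HELLO, ADA!
print("  padded  ".strip())    # padded

# Lists and dicts
squares = [n ** 2 for n in range(5)]
ages = {"Ada": 36, "Grace": 45}

# Loops and conditionals
for person, age in ages.items():
    if age > 40:
        print(f"{person} is over 40")
```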


There are many more useful string methods in Python, find out more about them in the Python string docs.

Working with data sources

Pandas provides a number of easy-to-use data import methods, including CSV and TSV import, copying from the system clipboard, and reading and writing JSON files. This is sufficient for most Python data analysis tasks:
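A brief sketch of the round trip (read_csv also accepts any file-like object, so we can demonstrate without touching the disk; the column names are made up):

```python
import io
import pandas as pd

# Read CSV data from an in-memory buffer instead of a file on disk
csv_data = io.StringIO("name,score\nAda,90\nGrace,85\n")
df = pd.read_csv(csv_data)

# Write the frame back out as CSV text (to_csv with no path returns a string)
csv_text = df.to_csv(index=False)
print(csv_text)

# A JSON round trip works the same way
json_text = df.to_json()
```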
Find all other Pandas data import functions in the Pandas docs.

Working with Pandas Data Frames

Pandas data frames are a great way to explore, clean, tweak, and filter your data sets while doing data analysis in Python. This section covers a few of the things you can do with your Pandas data frames.

Exploring data

Here are a few functions that allow you to easily know more about the data set you are working on:
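For instance, with a small made-up data set:

```python
import pandas as pd

df = pd.DataFrame({"diameter": [6, 8, 10, 14], "price": [7, 9, 13, 17.5]})

print(df.head(2))      # first two rows
print(df.shape)        # (4, 2) -- rows and columns
print(df.describe())   # summary statistics per numeric column
df.info()              # dtypes and memory usage
```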

Statistical operations

All standard statistical operations like minimums, maximums, and custom quantiles are present in Pandas:
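A short sketch on a sample Series:

```python
import pandas as pd

prices = pd.Series([7, 9, 13, 17.5])

print(prices.min(), prices.max())   # minimum and maximum
print(prices.mean())                # 11.625
print(prices.quantile(0.75))        # a custom quantile
```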

Cleaning the Data

It is quite common to have not-a-number (NaN) values in your data set. To be able to operate on a data set with statistical methods, you’ll first need to clean up the data. The fillna and dropna Pandas functions are a convenient way to replace the NaN values with something more representative for your data set, for example, a zero, or to remove the rows with NaN values from the data frame.
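A short illustration of both approaches, on a frame with some NaN values:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, 5.0, 6.0]})

filled = df.fillna(0)    # replace every NaN with 0
dropped = df.dropna()    # keep only the rows with no NaN at all
print(filled)
print(dropped.shape)     # (1, 2) -- only the last row is complete
```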


Filtering and sorting

Here are some basic commands for filtering and sorting the data in your data frames.
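For example, filtering with a boolean mask and sorting by a column:

```python
import pandas as pd

df = pd.DataFrame({"diameter": [6, 14, 8, 10], "price": [7, 17.5, 9, 13]})

large = df[df["diameter"] > 8]                       # boolean-mask filtering
by_price = df.sort_values("price", ascending=False)  # most expensive first
print(large)
print(by_price.iloc[0]["diameter"])
```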

Machine Learning

While machine learning algorithms can be incredibly complex, Python's popular modules make creating a machine learning program straightforward. Below is an example of a simple ML algorithm that uses Python and its data analysis and machine learning modules, namely NumPy, TensorFlow, Keras, and scikit-learn.

In this program, we generate a sample data set with pizza diameters and their respective prices, train the model on this data set, and then use the model to predict the price of a pizza of a diameter that we choose.

Once the model is set up we can use it to predict a result:
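The code itself is missing from this copy of the article. As a minimal stand-in for the Keras model the text describes, here is the same idea as a least-squares line fit using NumPy alone; the diameters and prices are made-up sample data:

```python
import numpy as np

# Made-up training data: pizza diameters (inches) and prices (dollars)
diameters = np.array([6, 8, 10, 14, 18])
prices = np.array([7, 9, 13, 17.5, 18])

# Fit a straight line price = a * diameter + b by least squares
a, b = np.polyfit(diameters, prices, deg=1)

# Use the fitted model to predict the price of a 12-inch pizza
predicted = a * 12 + b
print(f"A 12-inch pizza should cost about ${predicted:.2f}")
```

A real Keras model would replace the polyfit call with a small network trained by gradient descent, but the workflow (prepare data, fit, predict) is the same.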

Summary

In this article, we’ve taken a look at the basics of using Python for data analysis. For more details on the functionality available in Pandas, visit the Pandas user guides. For more powerful math with NumPy (it can be used together with Pandas), check out the NumPy getting started guide.


To learn more about Python for data analysis, enroll in our Data Analysis Nanodegree program today.

Pandas is arguably the most important Python package for data science. Not only does it give you lots of methods and functions that make working with data easier, but it has been optimized for speed which gives you a significant advantage compared with working with numeric data using Python’s built-in functions.

It’s common when first learning pandas to have trouble remembering all the functions and methods that you need, and while at Dataquest we advocate getting used to consulting the pandas documentation, sometimes it’s nice to have a handy reference, so we’ve put together this cheat sheet to help you out!

If you’re interested in learning pandas, you can consult our two-part pandas tutorial blog post, or you can sign up for free and start learning pandas through our interactive pandas for data science course.

Key and Imports

In this cheat sheet, we use the following shorthand:

df - Any pandas DataFrame object
s - Any pandas Series object

You’ll also need to perform the following imports to get started:
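The import lines are missing from this copy; they are the standard aliases used by the shorthand above:

```python
# np and pd are the conventional aliases assumed throughout the cheat sheet
import numpy as np
import pandas as pd
```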

Importing Data

pd.read_csv(filename) - From a CSV file
pd.read_table(filename) - From a delimited text file (like TSV)
pd.read_excel(filename) - From an Excel file
pd.read_sql(query, connection_object) - Read from a SQL table/database
pd.read_json(json_string) - Read from a JSON-formatted string, URL, or file
pd.read_html(url) - Parses an HTML URL, string, or file and extracts tables to a list of DataFrames
pd.read_clipboard() - Takes the contents of your clipboard and passes it to read_table()
pd.DataFrame(dict) - From a dict: keys for column names, values for data as lists

Exporting Data

df.to_csv(filename) - Write to a CSV file
df.to_excel(filename) - Write to an Excel file
df.to_sql(table_name, connection_object) - Write to a SQL table
df.to_json(filename) - Write to a file in JSON format

Create Test Objects

Useful for testing code segments.

pd.DataFrame(np.random.rand(20,5)) - 5 columns and 20 rows of random floats
pd.Series(my_list) - Create a Series from an iterable my_list
df.index = pd.date_range('1900/1/30', periods=df.shape[0]) - Add a date index

Viewing/Inspecting Data

df.head(n) - First n rows of the DataFrame
df.tail(n) - Last n rows of the DataFrame
df.shape - Number of rows and columns (an attribute, not a method)
df.info() - Index, datatype, and memory information
df.describe() - Summary statistics for numerical columns
s.value_counts(dropna=False) - View unique values and counts
df.apply(pd.Series.value_counts) - Unique values and counts for all columns

Selection

df[col] - Return column with label col as a Series
df[[col1, col2]] - Return columns as a new DataFrame
s.iloc[0] - Selection by position
s.loc['index_one'] - Selection by index label
df.iloc[0,:] - First row
df.iloc[0,0] - First element of first column

Data Cleaning

df.columns = ['a','b','c'] - Rename columns
pd.isnull() - Checks for null values; returns a Boolean array
pd.notnull() - Opposite of pd.isnull()
df.dropna() - Drop all rows that contain null values
df.dropna(axis=1) - Drop all columns that contain null values
df.dropna(axis=1,thresh=n) - Drop all columns that have fewer than n non-null values
df.fillna(x) - Replace all null values with x
s.fillna(s.mean()) - Replace all null values with the mean (mean can be replaced with almost any function from the statistics section)
s.astype(float) - Convert the datatype of the Series to float
s.replace(1,'one') - Replace all values equal to 1 with 'one'
s.replace([1,3],['one','three']) - Replace all 1 with 'one' and 3 with 'three'
df.rename(columns=lambda x: x + 1) - Mass renaming of columns
df.rename(columns={'old_name': 'new_name'}) - Selective renaming
df.set_index('column_one') - Change the index
df.rename(index=lambda x: x + 1) - Mass renaming of index

Filter, Sort & Groupby

df[df[col] > 0.5] - Rows where the col column is greater than 0.5
df[(df[col] > 0.5) & (df[col] < 0.7)] - Rows where 0.5 < col < 0.7
df.sort_values(col1) - Sort values by col1 in ascending order
df.sort_values(col2,ascending=False) - Sort values by col2 in descending order
df.sort_values([col1,col2],ascending=[True,False]) - Sort values by col1 in ascending order, then col2 in descending order
df.groupby(col) - Return a groupby object for values from one column
df.groupby([col1,col2]) - Return a groupby object for values from multiple columns
df.groupby(col1)[col2].mean() - Return the mean of the values in col2, grouped by the values in col1 (mean can be replaced with almost any function from the statistics section)
df.pivot_table(index=col1,values=[col2,col3],aggfunc='mean') - Create a pivot table that groups by col1 and calculates the mean of col2 and col3
df.groupby(col1).agg(np.mean) - Find the average across all columns for every unique col1 group
df.apply(np.mean) - Apply a function across each column
df.apply(np.max,axis=1) - Apply a function across each row

Join/Combine

df1.append(df2) - Add the rows of df2 to the end of df1 (columns should be identical; in pandas 2.0+, use pd.concat([df1, df2]) instead)
pd.concat([df1, df2],axis=1) - Add the columns of df2 to the end of df1 (rows should be identical)
df1.join(df2,on=col1,how='inner') - SQL-style join of the columns in df1 with the columns of df2 where the rows for col1 have identical values; how can be one of 'left', 'right', 'outer', 'inner'

Statistics

These can all be applied to a series as well.

df.describe() - Summary statistics for numerical columns
df.mean() - Return the mean of all columns
df.corr() - Find the correlation between columns in a DataFrame
df.count() - Count the number of non-null values in each column
df.max() - Find the highest value in each column
df.min() - Find the lowest value in each column
df.median() - Find the median of each column
df.std() - Find the standard deviation of each column

Download a printable version of this cheat sheet

If you’d like to download a printable version of this cheat sheet you can do so below.