Data Analysis · Data Mining · NumPy · Pandas · Python · SciKit-Learn

Numpy vs Pandas Performance

Hi guys!

In the last post, I wrote about how to deal with missing values in a dataset. Honestly, that post is related to my PhD project. I will not explain the details of the project, but I need to replace a certain percentage (10, 20, …, 90%) of my dataset with NaN and then impute all of those NaN values. In that post, I experimented with the Boston dataset, which is quite small (13 dimensions and 506 rows/tuples).

I ran into problems when I experimented with a bigger dataset, for instance, the Flights dataset. This dataset consists of 11 dimensions and almost one million rows (tuples), which is quite large. See the figure below:

Replacing 80% of the values with NaN in the Flights dataset using Pandas operations takes around 469 seconds. That is really slow. Moreover, in this case I only worked on 8 dimensions (only the numerical attributes).

I can think of a few possible reasons for the slow performance: 1) the code itself; 2) using Pandas for operations on a large number of tuples; or 3) both.

While looking for the answer, I found two posts comparing the performance of NumPy and Pandas, including when we should use each of them: ([1], [2])

After reading those posts, I decided to use NumPy instead of Pandas for this operation, because my dataset has a large number of tuples (almost one million).

This is how I implemented it. The function below replaces values with NaN:

 
import random
import numpy as np

def dropout(a, percent):
    # work on a copy so the original array is untouched
    mat = a.copy()
    # number of values to replace
    prop = int(mat.size * percent)
    # flat indices to mask, sampled without replacement
    mask = random.sample(range(mat.size), prop)
    # replace the selected positions with NaN
    np.put(mat, mask, [np.NaN] * len(mask))
    return mat

The code below handles missing-value imputation. It is based on a scikit-learn example (scikit-learn has functionality for imputing missing values). I imputed all numerical missing values with the column mean and all categorical missing values with the most frequent value:

import numpy as np
import pandas as pd
from sklearn.base import TransformerMixin

class DataFrameImputer(TransformerMixin):
    def __init__(self):
        """Impute missing values.
        Columns of dtype object are imputed with the most frequent value
        in column.
        Columns of other types are imputed with mean of column.
        """
    def fit(self, X, y=None):
        # for each column: most frequent value for object columns, mean otherwise
        self.fill = pd.Series([X[c].value_counts().index[0]
            if X[c].dtype == np.dtype('O') else X[c].mean() for c in X],
            index=X.columns)
        return self
    def transform(self, X, y=None):
        return X.fillna(self.fill)
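Since the class defines fit and transform, scikit-learn's TransformerMixin gives us fit_transform for free. A minimal usage sketch (assuming df_nan is a DataFrame that already contains NaN values; the variable name is only for illustration):

df_imputed = DataFrameImputer().fit_transform(df_nan)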

Here are the results:

As the figure above shows, I converted the Pandas DataFrame to a NumPy array. Just use this command:

data = df.values

and your DataFrame will be converted to a NumPy array. I then ran the dropout function while all the data was in NumPy array form. In the end, I converted the data back to a Pandas DataFrame after the operations finished.
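Putting it all together, the round trip looks roughly like this (just a sketch; df_num stands for a DataFrame holding the numerical columns of the Flights data, and the name is only for illustration):

import pandas as pd

data = df_num.values                                      # Pandas DataFrame -> NumPy array
data_nan = dropout(data, 0.8)                             # replace 80% of the values with NaN
df_nan = pd.DataFrame(data_nan, columns=df_num.columns)   # back to a Pandas DataFrame
df_imputed = DataFrameImputer().fit_transform(df_nan)     # impute all the NaN values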

Using NumPy operations, replacing 80% of the data with NaN and imputing all of the NaN values with the most frequent values takes only 4 seconds. Moreover, in this case I worked on all 11 dimensions (categorical and numerical attributes).

From this post, I just want to share with you that your choice matters. When we want to deal with a large number of tuples, we may consider choosing NumPy instead of Pandas. However, another important factor is still how well the code itself is optimized!

See you again in the next post!

Data Analysis · Data Mining · NumPy · Pandas · SciKit-Learn

Remove Duplicates from Correlation Matrix Python

Correlation is one of the tools most commonly used by data analysts in their analytical workflow. By using correlation, we can understand the mutual relationship or association between two attributes. Let's start with an example. Suppose I want to analyse the Boston housing dataset; see the example code below. If you are not familiar with Jupyter Notebook, Pandas, NumPy, and other Python libraries, I have a couple of old posts that may be useful for you: 1) setting up Anaconda, 2) understanding Python libraries for data science.


import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets

boston = datasets.load_boston() #Load the Boston housing dataset, which is available in scikit-learn
boston = pd.DataFrame(boston['data'], columns=boston['feature_names'])

We can use the command boston.head() to see the data and boston.shape to see its dimensions. We can then use the command below to get the correlation values among all attributes in the Boston housing dataset (in this experiment I used Pearson correlation).


dataCorr = boston.corr(method='pearson')
dataCorr

After running this command, we will see a correlation matrix like the one in the figure below:

The question is, how do we remove duplicates from this correlation matrix to make it more readable? I found a nice answer on Stack Overflow; we can use these commands:


dataCorr = dataCorr[abs(dataCorr) >= 0.01].stack().reset_index()
dataCorr = dataCorr[dataCorr['level_0'].astype(str)!=dataCorr['level_1'].astype(str)]

# filtering out lower/upper triangular duplicates 
dataCorr['ordered-cols'] = dataCorr.apply(lambda x: '-'.join(sorted([x['level_0'],x['level_1']])),axis=1)
dataCorr = dataCorr.drop_duplicates(['ordered-cols'])
dataCorr.drop(['ordered-cols'], axis=1, inplace=True)

dataCorr.sort_values(by=[0], ascending=False).head(10) #Get the 10 highest correlations of pairwise attributes

Finally, we get a table consisting of attribute pairs and their correlation values, and, most importantly, without any duplication.

I also found another approach from someone on GitHub who created a nice function to remove this duplication. Please see this link.
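I have not copied that function here, but a common way to achieve the same result is to mask the upper triangle of the correlation matrix with NumPy before stacking it. A small sketch (my own version, not the exact code from that link):

import numpy as np

corr = boston.corr(method='pearson')
# keep only the lower triangle (diagonal excluded); everything else becomes NaN and is dropped by stack()
mask = np.tril(np.ones(corr.shape, dtype=bool), k=-1)
pairs = corr.where(mask).stack().reset_index()
pairs.columns = ['attribute_1', 'attribute_2', 'correlation']
pairs.sort_values(by='correlation', ascending=False).head(10)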

Thank you and see you next time.

Data Analysis · NumPy · Pandas

Python Pandas DataFrame Basics Tutorial

In this post, I am going to show you how to deal with data in Python. Before going there, you should understand which Python libraries you need to know if you want to work with data in Python. Python has tons of libraries, especially for data science. I have a couple of old posts that may be useful for you: 1) setting up Anaconda, 2) understanding Python libraries for data science. In this tutorial I use Jupyter Notebook; if you do not have it or are not familiar with it yet, please read the instructions above, otherwise just scroll down!

Let's start with the simplest thing. When you want to deal with data in Python, there is an amazing library called Pandas. If you are familiar with a spreadsheet tool such as MS Excel, Pandas is similar: it shows our data in the form of a table. The main difference is that in Excel you just drag and drop, but in Pandas you have to understand its standard syntax and commands.

Let’s get started.

To start analyzing data, you can import your data (e.g., csv, xls, etc.) into the Python environment using Pandas: import pandas as pd and then pd.read_csv('data.csv'). However, to keep things simple, in this tutorial I just create my data rather than importing it from a file.


#Create DataFrame
import pandas as pd #when we want to use Pandas, we have to import it
import numpy as np #numpy is another useful library in python for dealing with number
df = pd.DataFrame(
    {'integer':[1,2,3,6,7,23,8,3],
     'float':[2,3.4,5,6,2,4.7,4,8],
     'string':['saya','aku', np.NaN ,'cinta','kamu','a','b','indonesia']}
)

1. To show your DataFrame, just use this command!

#show data in DataFrame
df

2. If you want to access one or more rows of your DataFrame, you can do it using the loc syntax.

#Show data based on index
df.loc[1]

3. If you only need some columns and want to ignore the others, you can just select those columns:

#show data based on columns selected
df[['string','float']]

4. You can also apply an IF condition to your data, similar to a filter in Excel. Use this command and see the result.

#show data with condition
df[df['float']>4]

5. You are also able to rename columns by using this command:

#rename column in DataFrame
df.rename(columns={'string':'char'})

6. When I created the data, I added one row that contains a NaN (null) value. Missing values are a common issue in data. So, how do we deal with them? First, we need to know whether our DataFrame contains missing values or not, by using this command.

#Show NaN value in DataFrame
df.isnull()

7. The simplest way to deal with missing values is to drop them all. How do we drop missing values? Here is the command:

# Drop all rows contain NaN value
df.dropna()

8. We can also compute summaries of our data (e.g., mean, median, mode, max, etc.); use these commands and see what you get!

#Show the mean, median, and maximum of the 'float' column
mean = df['float'].mean()
print("mean =", mean)
median = df['float'].median()
print("median =", median)
max_value = df['float'].max()  # avoid shadowing Python's built-in max()
print("max =", max_value)

Here is the result: https://github.com/rischanlab/PyDataScience.Org/blob/master/python_notebook/1%20Basic%20Pandas%20Operation%20.ipynb

 

 

Machine Learning · Matplotlib · NumPy · Pandas · SciKit-Learn

Linear Regression using Python

For whoever wants to learn machine learning or become a data scientist, the most obvious thing to learn first is linear regression. Linear regression is the simplest machine learning algorithm and it is generally used for forecasting. The goal of linear regression is to find a relationship between one or more independent variables and a dependent variable by fitting the best line. This best-fit line is known as the regression line and is defined by the linear equation Y = a*X + b.

Take, for instance, the height of children vs. their age. After collecting data on children's height and their age in months, we can plot the data in a scatter plot such as the one in the figure below.

 

Linear regression will find the relationship between age as the independent variable and height as the dependent variable by finding the best-fit line through all of the points on the scatter plot. Finally, it can be used for prediction, for instance, to predict a child's height when he or she reaches 35 months.

 

How to implement this linear regression in Python?

First, to make things easier, I will generate a random dataset for our experiment.


import pandas as pd
import numpy as np

np.random.seed(0)
x = np.random.rand(100, 1)              # generate 100 random values for the x variable
y = 2 + 3 * x + np.random.rand(100, 1)  # generate y from a linear relation with added noise
x[:10], y[:10]                          # show the first 10 rows of x and y

 

There are many ways to build a regression model; we can build it from scratch or just use an existing Python library. In this example, I use scikit-learn.


from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from matplotlib import pyplot as plt

# Initialize the model
model = LinearRegression()
# Train the model - fit the data to the model
model.fit(x, y)
# Predict
y_predicted = model.predict(x)

# model evaluation
rmse = np.sqrt(mean_squared_error(y, y_predicted))  # square root of MSE gives the RMSE
r2 = r2_score(y, y_predicted)

# printing values
print('Slope:' ,model.coef_)
print('Intercept:', model.intercept_)
print('Root mean squared error: ', rmse)
print('R2 score: ', r2)

# plotting values
plt.scatter(x, y, s=5)
plt.xlabel('x')
plt.ylabel('y')

# predicted values
plt.plot(x, y_predicted, color='r')
plt.show()
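Once the model is trained, we can also predict y for a new, unseen x value, in the same spirit as predicting a child's height at 35 months (the value 0.5 below is only an arbitrary illustration):

x_new = np.array([[0.5]])  # one new sample with one feature
print('Prediction for x = 0.5:', model.predict(x_new))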

Tarraaa! It's easy, right?

See you next time

Matplotlib · NumPy · Pandas · Plotting in Python

Read the data and plotting with multiple markers

Let's assume that we have some Excel data and we want to plot it on a line chart with different markers. Why markers? Just imagine we have plotted a line chart with multiple lines using different colours, but we only have black and white ink; after printing, all of the lines will be black. That's why we need markers.

 

For instance, our data can be seen in the table above; this is just dummy data describing algorithm performance vs. the number of k. We want to plot this data on a line chart. In a previous experiment, we already saw how to plot a line chart with multiple lines and multiple styles. However, in that experiment we declared the style of each line statically, which becomes tedious when we have to declare every line one by one.

Let’s get started

The first step is to load our Excel data into a pandas DataFrame.


import pandas as pd
import numpy as np

import matplotlib as mpl
import matplotlib.pyplot as plt

xl = pd.ExcelFile("Experiment_results.xlsx")

df = xl.parse("Sheet2", header=1, index_col=0)
df.head()

It is very easy to load Excel data into a DataFrame, and we can use some very useful parameters such as the sheet name, header, and index column. In this experiment I parse "Sheet2" because my data is in Sheet2, and I pass header=1, which means the column names are read from the second row of the sheet (the parameter is zero-indexed, so header=0 would take them from the first row, and header=None means the file has no header row). I also use index_col=0, which means I want to use the first column of my Excel sheet as the index of my DataFrame. Now we have a DataFrame like the one shown in the table above.
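As a side note, recent pandas versions can do the same thing in a single call with pd.read_excel; a sketch under the same assumptions about the file layout:

df = pd.read_excel("Experiment_results.xlsx", sheet_name="Sheet2", header=1, index_col=0)
df.head()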

The second step is setting the markers. As I said in the previous experiment, matplotlib supports a lot of markers. Of course, I don't want to define them one by one manually. See the code below:


# create a list of valid, visible markers from mpl.markers
valid_markers = [item[0] for item in mpl.markers.MarkerStyle.markers.items()
                 if item[1] != 'nothing'
                 and not item[1].startswith('tick')
                 and not item[1].startswith('caret')]

# valid_markers = mpl.markers.MarkerStyle.filled_markers

# pick one distinct marker per column, at random
markers = np.random.choice(valid_markers, df.shape[1], replace=False)

Now we have a list of markers inside the 'markers' variable; the markers are chosen randomly, and df.shape[1] (the number of columns) determines how many we need. Let's start plotting the data.


ax = df.plot(kind='line')
# assign a different marker to each line
for i, line in enumerate(ax.get_lines()):
    line.set_marker(markers[i])

# adding the legend
ax.legend(ax.get_lines(), df.columns, loc='best')
plt.show()

Taraaaa!!! It's easy, right?

 

 

The next question is how to plot a figure like the one below.

Plotting multiple lines with multiple styles

Check it out here. 

Bokeh · Data Analysis · Data Mining · Keras · Machine Learning · Matplotlib · NumPy · Pandas · Plotting in Python · Ploty · SciKit-Learn · SciPy · Seaborn

Python for Data Science

I spent two years processing and manipulating data using R, mostly for my research project. I had only heard of Python and never tried it for my work before. But now, after using Python, I have really fallen in love with this language. Python is very simple, and it is known as one of the easiest languages to learn. The reason I previously used R was that it is supported by tons of open-source libraries for scientific analysis. Now, with the popularity of Python, I can easily find all the libraries I need in Python, and all of them are open source as well.

There are core libraries that you must know when you start to do data analytics using Python:

  1. NumPy, which stands for Numerical Python. Python is different from R: R was designed for scientists, whereas Python is a general-purpose programming language. That's why Python needs a library to handle numerical things such as complex arrays and matrices. Repo project link: https://github.com/numpy/numpy
  2. SciPy, a library for scientific computing that handles things such as statistics, linear algebra, optimization, etc. Repo project link: https://github.com/scipy/scipy
  3. Pandas. If you have experience with R, the Pandas DataFrame is very similar to R's. Using a DataFrame, we can easily manipulate, aggregate, and analyse our dataset. The data is shown in a table similar to an Excel spreadsheet or a DataFrame in R, and it is convenient to access the data by columns, rows, and so on. Repo project link: https://github.com/pandas-dev/pandas
  4. Matplotlib. Plotting is very important for data analysis. Why do we need plotting? The simple answer is that it makes things easier for everyone, and we know that a picture is worth a thousand words. To generate visualizations from a dataset, we absolutely need data visualization tools. If you have experience with Excel, it is very easy: just select the table that you want to plot and choose the plot type, such as a bar chart, line chart, etc. In R, the most popular plotting tool is ggplot; you can use the standard 'plot' function in R, but if you want more advanced and more beautiful figures you need ggplot. What about Python? Matplotlib is the basic library for visualization in Python. Repo project link: https://github.com/matplotlib/matplotlib

Those are the core libraries that you need when you start to use Python for data analytics. There are tons of other Python libraries out there; here are some that may be useful for you:

  1. SciKit-Learn: when you want to apply machine learning, you have to understand this one.
  2. Scrapy: for scraping data from the Web, when you want to gather data from websites for your analysis, for instance, collecting tweet data from Twitter.
  3. NLTK: if you want to do natural language processing.
  4. Theano, TensorFlow, Keras: when you are not satisfied with NumPy's performance, or want to apply neural network algorithms or do deep learning work, you have to understand these libraries.
  5. Interactive visualization tools: matplotlib is a basic plotting tool and it is enough for me as a researcher, especially for publications, but when we want dynamic or more interactive plotting, we can use Seaborn, Plotly, or Bokeh.

Python environment

If you do not want to think too much about how to install all of those libraries, just try Anaconda; it is really cool.

See ya next time

Brisbane, 24 November 2017

Bokeh · Data Mining · NumPy · Pandas · Plotting in Python

Python for Data Science Cheat Sheet

Here is a set of Python for Data Science cheat sheets that are quite helpful for refreshing our memory; for those who are just starting to use Python for data analysis, data mining, or data science, they also make good reading material.

 

Python Basic for Data Science

Here is the cheat sheet:
Python basic

A good-quality PDF version can be downloaded here

 

Python NumPy Cheat Sheet

Here is the cheat sheet image:

Numpy Basic

A good-quality PDF version can be downloaded here

 

Python Pandas Cheat Sheet

Here is the cheat sheet image:

Pandas Basic

A good-quality PDF version can be downloaded here

 

Python Bokeh Interactive Visualization Cheat Sheet

Here is the cheat sheet image:

Bokeh

A good-quality PDF version can be downloaded here

 

I got the cheat sheets above from DataCamp.

I hope this is useful.