September 24, 2023


Introduction

When working on predictive modeling, it is common to see a model achieve good accuracy on the training data and lower accuracy on the test data. Some gap is expected for most machine learning problems, but if the difference between training and test accuracy is large, the model is overfitting the training data.

There can be many reasons for overfitting:


- The model learns patterns from random noise in the training data.
- The training and test data come from different time periods. For example, imagine you are building a model to identify credit card fraud: the training data consists of transactions from 2000 to 2019, while the test data consists of transactions from 2020 and 2021.
- The training and test data have different feature values. For example, imagine you are building a model to forecast retail sales: the training data covers transactions in European countries, while the test data covers transactions in Asian countries.

If you create the training and test splits yourself, you can control and reduce such mismatches.


However, when you participate in a hackathon, you are usually given two datasets: a training set and a test set. If the hackathon involves supervised learning, you will also have labels for the training data, but not for the test data.


It may happen that the training and test data come from different time periods or have different feature values. In such situations, if you build a model using the training data and apply it to the test data, you may see a large accuracy gap between the two datasets.

One might say that cross-validation can prevent such gaps. However, cross-validation draws its validation samples from the training data itself, so the problem still exists.
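To make this concrete, here is a minimal sketch on synthetic data (the dataset, the "amount" feature, and the numbers are made up for illustration, not taken from the article): cross-validation scores computed on the training data alone look excellent, yet the same model performs poorly on a test set drawn from a shifted distribution.

# Synthetic illustration: cross-validation on the training data alone
# cannot detect a train/test distribution shift.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# hypothetical features: training amounts centred at 50, test amounts centred at 90
X_train = pd.DataFrame({"amount": rng.normal(50, 10, 1000)})
y_train = (X_train["amount"] > 50).astype(int)
X_test = pd.DataFrame({"amount": rng.normal(90, 10, 1000)})
y_test = (X_test["amount"] > 90).astype(int)

clf = RandomForestClassifier(random_state=42)
print("CV accuracy on train:", cross_val_score(clf, X_train, y_train, cv=5).mean())   # looks high
print("Accuracy on shifted test:", clf.fit(X_train, y_train).score(X_test, y_test))   # much lower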

Therefore, we need another method to identify such mismatches. This is where adversarial validation comes in.


This article was published as a part of the Data Science Blogthon.

Table of Contents: What is Adversarial Validation? | Adversarial Validation in Action

What is Adversarial Validation?

Adversarial validation is a smart yet simple way to identify similarities between training and test datasets. It uses a simple logic – if a binary classifier model is able to differentiate between training and test samples, it means that there is dissimilarity between training and test data.

The approach involves a few basic operations:

1. Drop the actual target column from the training dataset.
2. Create a label column in both datasets (0 for the train data and 1 for the test data, or vice versa).
3. Combine the training and test datasets.
4. Train a binary classifier to see whether it can differentiate between training and test samples.
5. Evaluate the AUC ROC score, i.e. the area under the receiver operating characteristic curve.

If the AUC ROC score is ~0.5, it means that the test and training data are similar.
If the AUC ROC score is significantly greater than 0.5, it means that the test and training data are not similar.
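Putting the steps above together, here is a minimal, self-contained sketch of the whole procedure. It assumes you already have two pandas DataFrames, train_df and test_df, with matching columns; the function name and parameters are placeholders, not from the article.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def adversarial_validation_auc(train_df, test_df, target_col=None):
    """Return the AUC of a classifier trained to separate train rows from test rows."""
    train = train_df.drop(columns=[target_col]) if target_col else train_df.copy()
    test = test_df.copy()

    # label the origin of each row: 0 = train, 1 = test
    train["is_test"] = 0
    test["is_test"] = 1

    # combine, shuffle, and split the features from the origin label
    all_data = pd.concat([train, test], axis=0, ignore_index=True).sample(frac=1, random_state=42)
    X = all_data.drop(columns=["is_test"]).fillna(-1)
    y = all_data["is_test"]

    # a shallow, class-balanced random forest is enough to detect gross differences
    clf = RandomForestClassifier(random_state=42, max_depth=2, class_weight="balanced").fit(X, y)
    return roc_auc_score(y, clf.predict_proba(X)[:, 1])

A score close to 0.5 suggests the two sets are hard to tell apart; a score close to 1 suggests a strong train/test mismatch.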

Adversarial Validation in Action

Module 1: Identifying whether the test and train datasets are similar

1. Download the dataset and import the libraries

We will download the Titanic dataset from here:

https://www.kaggle.com/c/titanic

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

2. Load only numerical features

Python code:

train_data = pd.read_csv("train.csv")
test_data = pd.read_csv("test.csv")

# select only numeric features
X_train = train_data.select_dtypes(include=['number']).copy()
X_test = test_data.select_dtypes(include=['number']).copy()

3. Drop the target feature and create a label column (0 for train data and 1 for test data).

# drop the target column from the training data
X_train = X_train.drop(['Survived'], axis=1)
print(X_train.shape)
print(X_test.shape)

# add the train/test label
X_train["Adv_Val_label"] = 0
X_test["Adv_Val_label"] = 1

4. Combine and shuffle the data

# create one large dataset
all_data = pd.concat([X_train, X_test], axis=0, ignore_index=True)

# shuffle
all_data = all_data.sample(frac=1)

5. Train a Random Forest Model

X = all_data.drop(['Adv_Val_label'], axis=1).fillna(-1)
y = all_data['Adv_Val_label']

clf = RandomForestClassifier(random_state=42, max_depth=2, class_weight="balanced").fit(X, y)


6. Evaluate the AUC ROC score

from sklearn.metrics import roc_auc_score

auc_score = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(auc_score)

Output: 1.0

Here, the AUC ROC score is 1.0. This means the model can perfectly differentiate between the training and test samples.

Let's also look at the feature importances. This will help us understand which features are driving the predictions.

feature_imp_random_forest = pd.DataFrame({'feature': list(X.columns),
                                          'RF_Score': list(clf.feature_importances_)})
feature_imp_random_forest = feature_imp_random_forest.sort_values(by='RF_Score', ascending=False)
feature_imp_random_forest

Looking at the feature importances, we see that ~97.5% of the importance comes from the PassengerId column.

Let us remove that column and train the model again.

7. Drop the column and retrain

train_data = pd.read_csv("train.csv")
test_data = pd.read_csv("test.csv")

# select only numeric features
X_train = train_data.select_dtypes(include=['number']).copy()
X_test = test_data.select_dtypes(include=['number']).copy()

# drop the target column from the training data, and PassengerId from both
X_train = X_train.drop(['Survived', 'PassengerId'], axis=1)
X_test = X_test.drop(['PassengerId'], axis=1)

# add the train/test label
X_train["Adv_Val_label"] = 0
X_test["Adv_Val_label"] = 1

# create one large dataset and shuffle it
all_data = pd.concat([X_train, X_test], axis=0, ignore_index=True)
all_data = all_data.sample(frac=1)

X = all_data.drop(['Adv_Val_label'], axis=1).fillna(-1)
y = all_data['Adv_Val_label']

clf = RandomForestClassifier(random_state=42, max_depth=2, class_weight="balanced").fit(X, y)
auc_score = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(auc_score)

Output: 0.6214792797727406

Now, with the same hyperparameters, the AUC ROC score is ~0.62.

The score has dropped, which means it is now harder for the model to distinguish between the training and test datasets. Let's again look at the feature importances to understand which features are driving the predictions.

feature_imp_random_forest = pd.DataFrame({'feature': list(X.columns),
                                          'RF_Score': list(clf.feature_importances_)})
feature_imp_random_forest = feature_imp_random_forest.sort_values(by='RF_Score', ascending=False)
feature_imp_random_forest

Output:


The most important feature is Fare (~34.4% importance), followed by Age (~27.2%), and so on. Unlike the previous run, the feature importance is no longer dominated by a single column.

Let us now understand how to handle the case when there is a difference between train and test data.

Module 2: How to Build a Better Validation Set – When the Test and Training Datasets Are Different

Here, we will use the same dataset and model created in step 7. Since the AUC ROC score is ~0.62, the test and training data are not identical, so we need to carve out a validation set from the original training data that is very similar to the test data. Let us call it adversarial_validation_data.


Step 1: Use the model from step 7 to estimate each sample's probability of being a test sample.

# score each sample with its probability of being a test sample
X_new = X.copy()
X_new['proba'] = clf.predict_proba(X)[:, 1]
X_new['target'] = y

Step 2: Keep only the rows that came from the original training dataset (target == 0).

X_new = X_new[X_new['target'] == 0]

Step 3: Sort the data in descending order of probability and select the top 20% of the samples. This means we are selecting the training samples that are most similar to the test data. Let's call this subset adversarial_validation_data, and the rest adversarial_training_data.

nrows = X_new.shape[0]
adversarial_validation_data = X_new.sort_values(by='proba', ascending=False)[:int(nrows * 0.2)]
adversarial_training_data = X_new.sort_values(by='proba', ascending=False)[int(nrows * 0.2):]

Now we can train a machine learning model using adversarial_training_data and tune it against adversarial_validation_data. The accuracy we get on adversarial_validation_data will be much closer to the accuracy on the actual test data.
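As a rough sketch of how this split might be used (not from the original article): drop the helper columns and recover the original Survived labels by row index. This assumes train_data keeps its default index from read_csv and was concatenated first, so the indices of the target == 0 rows still line up with train_data's rows; the classifier below is just an illustrative model, not a tuned solution, and it reuses the imports from above.

# drop the helper columns added during adversarial validation
adv_train = adversarial_training_data.drop(['proba', 'target'], axis=1)
adv_valid = adversarial_validation_data.drop(['proba', 'target'], axis=1)

# recover the original Survived labels by index (valid only if the indices
# still line up with train_data's row order, as assumed above)
y_adv_train = train_data.loc[adv_train.index, 'Survived']
y_adv_valid = train_data.loc[adv_valid.index, 'Survived']

# train on the less test-like rows, validate on the most test-like rows
model = RandomForestClassifier(random_state=42)
model.fit(adv_train, y_adv_train)
print("Accuracy on the adversarial validation set:", model.score(adv_valid, y_adv_valid))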

Conclusion

Adversarial validation is a clever and simple way to determine whether our test and training data are similar: we combine the train and test data, label them with 0 for training rows and 1 for test rows, shuffle them, and then check whether a binary classifier can tell them apart. In this article:

We looked at how to deal with overfitting and improve leaderboard scores in a hackathon using adversarial validation. We first saw how adversarial validation helps to identify whether the test and train datasets are similar. We also looked at how to build a better validation set when the test and train data are different.

If you want to discuss this with me, feel free to connect with me on LinkedIn.

The media shown in this article is not owned by Analytics Vidhya and is used at the author's discretion.


