Ramgopal Prajapat:

Learnings and Views

Reducing RTO Orders for Your Ecommerce using AI/ML

By: Ram on Jan 09, 2023

Context:

Return to origin, or RTO, is a common term in e-commerce. A delivery is marked as RTO by the delivery partner when the order could not be delivered due to an issue with the delivery address, or because the buyer is not responding.

 

“For some e-commerce market players, RTO can go up to 30% if not kept in check” - source

% RTO is the ratio of orders that could not be delivered to the total number of orders, and it is a very important metric for e-commerce – both for marketplaces like Flipkart and Amazon, and for D2C businesses.

RTO increases costs and hits the bottom line for e-commerce companies; some of the cost components are:

  • Blocked inventory and lost-in-transit costs
  • Forward and reverse shipping costs
  • Operational costs – order processing, quality checks on return, reverse order processing, etc.
  • Damaged or broken products – some products may incur these costs

 

Products or orders returned by the customer after delivery are NOT considered RTO in this scenario.

There are two strong indicators of RTO before an order is processed for delivery:

  • Customers – Repeat offenders or Abusers
  • Address – Incomplete or Gibberish

In this blog, we will discuss the steps to develop an NLP-based model to predict or identify addresses that are incomplete or gibberish.

 

Overall Approach

  • Prepare Data – Take all the addresses that have been validated as gibberish or incomplete, and a similar count of genuine addresses for which orders were delivered.
  • Encoding of Addresses: Deep learning models require textual data to be represented in numerical format. A text embedding converts text (words or sentences) into a numerical vector. The pre-trained Universal Sentence Encoder model is used to encode text into high-dimensional vectors.
  • Deep Learning Model Definition & Training: In this scenario, the input text (address) is to be classified into two categories (binary text classification). The deep learning model architecture is defined and trained on the input text/address dataset.
  • Deep Learning Text Classification Model Performance: The model performance is evaluated on a test sample, and on the full address list for a month – across all the labelled data – to assess performance.

 

Hands-on Text Classification Model for Categorising Addresses as Gibberish or Genuine

 

  1. List of all the required Python libraries

  2. Read Data

Read the prepared data.

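The original shows this step as a screenshot; a minimal sketch with an inline stand-in for the prepared file (in practice this would be a `pd.read_csv` on the labelled address file – file name and column names are assumptions) is:

```python
import io
import pandas as pd

# Tiny inline sample standing in for the prepared address file.
csv_data = io.StringIO(
    "address,label\n"
    "12 MG Road Bengaluru Karnataka 560001,Genuine\n"
    "asdkjh qwe,Gibberish\n"
    "Flat 4B Green Park New Delhi 110016,Genuine\n"
    "zzzz,Gibberish\n"
)
df = pd.read_csv(csv_data)
print(df.shape)   # (4, 2)
print(df.head())
```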

  3. Universal Sentence Encoder Model

The Universal Sentence Encoder encodes text into high-dimensional vectors. These vectors will be used as input for the text classification model. The pre-trained Universal Sentence Encoder is publicly available on TensorFlow Hub, and it is used here to encode the addresses.


 

  4. Label Encoder: Encode the labels – whether the delivery address is genuine or not – using the LabelEncoder available in the sklearn package.

 

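A minimal sketch of the label-encoding step (the sample labels are illustrative):

```python
from sklearn.preprocessing import LabelEncoder

labels = ["Genuine", "Gibberish", "Genuine", "Gibberish"]  # sample labels
le = LabelEncoder()
y = le.fit_transform(labels)          # classes are sorted alphabetically

print(list(le.classes_))  # ['Genuine', 'Gibberish']
print(list(y))            # [0, 1, 0, 1]
```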

 

  5. Training and Test Samples

Split the input data into a train sample (used for developing the model) and a test sample (used for validating the model).

30% of the addresses are randomly selected for the test sample, and the remaining 70% are used for training the model.
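The split can be sketched with scikit-learn's `train_test_split` (the `stratify` and `random_state` arguments are assumptions; X and y stand in for the addresses and encoded labels):

```python
from sklearn.model_selection import train_test_split

# Stand-in data: address strings and encoded labels from the earlier steps.
X = ["address %d" % i for i in range(10)]
y = [0, 1] * 5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y
)
print(len(X_train), len(X_test))  # 7 3
```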

  6. Model Definition and Fitting the Model

Multiple deep learning layers from Keras are used to define the architecture of the text classification model. A Lambda layer is used to create a custom embedding based on the Universal Sentence Encoder model.

 

The input to the model is text, and the output has 2 categories – whether the address is genuine or gibberish.


 

Fitting the model

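The fit call can be sketched as below; random 512-dimensional vectors stand in for the encoder output so the example runs standalone, and the epochs and batch size are assumptions:

```python
import numpy as np
import tensorflow as tf

# Stand-in training data: random vectors play the role of USE embeddings.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 512)).astype("float32")
y_train = rng.integers(0, 2, size=100)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(X_train, y_train, epochs=2, batch_size=32,
                    validation_split=0.1, verbose=0)
print(history.history["loss"])
```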

 

  7. Model Performance


The output is decoded and a confusion matrix is created.

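A sketch of decoding the softmax output and building the confusion matrix (the probabilities and labels below are illustrative, not the post's actual results):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import confusion_matrix, accuracy_score

# Stand-in values: softmax outputs and true labels (0 = Genuine, 1 = Gibberish).
le = LabelEncoder().fit(["Genuine", "Gibberish"])
y_true = np.array([0, 0, 1, 1, 1, 0])
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7],
                  [0.1, 0.9], [0.4, 0.6], [0.8, 0.2]])

y_pred = probs.argmax(axis=1)            # class index with the highest score
decoded = le.inverse_transform(y_pred)   # back to 'Genuine'/'Gibberish'

print(confusion_matrix(y_true, y_pred))  # rows: actual, cols: predicted
print(accuracy_score(y_true, y_pred))
```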

The model is 91% accurate in flagging incomplete/gibberish addresses as incomplete/gibberish.

We tried a few addresses at random, and the results look reasonable.


 

There are multiple other approaches or models that can be used to improve the model performance.
