Advanced Matching Capabilities to Improve Item Setup


Merchant Technology

Sunnyvale, CA



  • Java
  • Scala
  • Scalatra
  • Akka
  • Python
  • Tensorflow
  • Cassandra
  • OneOps
  • CI/CD Looper Pipeline
  • Logstash
  • Elasticsearch
  • Kibana
  • Kafka

Overview

Walmart is a marketplace with hundreds of millions of products and thousands of sellers. Given a seller who wishes to set up a new product offer on our marketplace, we need to determine whether we already carry this product in our catalog. If we do, we add this seller's offer to the existing item page, merging the offer into a single product page. If we fail to make this determination, customers searching for a product on our website may see millions of duplicates, leaving them unable to view all the purchase options available to them. To solve this problem, we form groups of identical items sold by different sellers, as illustrated in the screenshot from one of our product pages:

While grouping items sold by individual sellers, we need to be precise in the creation of groups and ensure they contain identical products sold by different sellers. Once a group is formed, it is represented by exactly one item page with a single title, single description and a single price. Although customers can select other sellers and see their prices on the seller choice page, as shown above, the individual seller’s product content is not shown on this page. Thus, if the groups are not homogeneous, an individual seller’s product might be different from what is portrayed on the item page.
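The grouping step described above amounts to collecting every seller's offer under one product identifier, so that a single item page can list all purchase options. A minimal sketch, using hypothetical offer tuples and illustrative product IDs:

```python
from collections import defaultdict

# Hypothetical offers as (product_id, seller, price) tuples.
# Product IDs and seller names are illustrative only.
offers = [
    ("P1", "SellerA", 19.99),
    ("P1", "SellerB", 18.49),
    ("P2", "SellerC", 5.00),
]

# One item page per product, listing every seller's offer.
item_pages = defaultdict(list)
for product_id, seller, price in offers:
    item_pages[product_id].append((seller, price))
```

Here both offers for `P1` end up on one page, which is exactly why the groups must be homogeneous: every offer in the group inherits that page's single title, description, and price presentation.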

The Challenges

1. Lack of Industry Identifier in the E-commerce Arena
In the retail industry, products are identified by a unique number internationally known as the GTIN (Global Trade Item Number). A GTIN is assigned to each product and product packaging variation. In theory, each GTIN should identify exactly one product; in practice, with the proliferation of online sellers and the scale of enforcing the GTIN convention across hundreds of millions of products globally, it does not always work. Sellers often re-use GTINs, leaving two different products identified by a single GTIN. These errors occur frequently enough to have a tangible impact on our customers. Therefore, we cannot rely solely on key identifiers (the GTIN or similar codes such as the UPC and EAN) to determine whether two or more offers from different vendors refer to the same product.
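Structural validation can at least catch malformed identifiers at ingestion: every GTIN carries a GS1 mod-10 check digit. Note that this cannot catch the failure mode above, where a structurally valid GTIN is re-used for the wrong product. A minimal sketch of the standard check:

```python
def gtin_check_digit(body: str) -> int:
    """GS1 mod-10 check digit: weight digits 3,1,3,1,... from the right."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    """Validate an 8-, 12-, 13-, or 14-digit GTIN (covers UPC and EAN)."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    return gtin_check_digit(gtin[:-1]) == int(gtin[-1])
```

For example, the EAN-13 `4006381333931` passes the check, while the same digits with a wrong final digit fail.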

2. Negative Impact on Customer Experience
When two different products are merged into a single item page, the customer may receive a different item than the one he or she purchased. When this happens, there is no intermediate checkpoint in the system and no time to correct the error before the customer actually experiences this unpleasant situation. Customers will either have to wait to receive a substitute or will get a refund. Either outcome prevents the customer from enjoying the purchase within the time frame he or she had intended.

3. Scale and Execution Time
The only time at which matching errors can be detected is when the item itself is ingested into the Walmart Ecommerce content platform. This is not an easy task considering the platform receives anywhere between 20 and 60 million updates per day and can see as many as 5 million product updates per hour. Thus, any intelligent process capable of detecting a bad match must be able to process tens of millions of items per day while sustaining peaks of millions of items per hour. If that were not complex enough, the platform does not have unlimited time to handle incoming items. Therefore, in addition to handling millions of items, the detection of bad matches (mismatches) needs to take place on the millisecond time scale. This is not easy to achieve!
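A back-of-envelope calculation using the figures quoted above makes the throughput requirement concrete:

```python
# Rough throughput a mismatch detector must sustain (figures from the text).
daily_updates = 60_000_000   # upper bound of daily catalog updates
peak_hourly = 5_000_000      # peak product updates in a single hour

avg_per_sec = daily_updates / 86_400   # seconds per day
peak_per_sec = peak_hourly / 3_600     # seconds per hour

print(f"average: ~{avg_per_sec:,.0f} updates/sec")
print(f"peak:    ~{peak_per_sec:,.0f} updates/sec")
```

At roughly 1,400 updates per second during peaks, a per-item budget measured in milliseconds is only achievable with substantial parallelism, which is why the architecture below leans on actors, Kafka, and horizontally scalable storage.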

4. Ambiguity at Scale
Millions of items are uploaded to the platform each day, each with its own text description. A description usually includes a title, further details, and structured attributes describing the product's features. However, since each of these items is uploaded by a different seller, there is no rule for how a title should be worded or how a description should be articulated. As a result, for each product and GTIN we may see many variations in titles and descriptions. This causes a significant amount of ambiguity in the data that we process.

For example, the same Nike shoes can be uploaded as:

Title 1: Nikes

Title 2: Red running shoes by Nike.

The net result is that no simple heuristic approach can be applied to all products to detect mismatches. We need some level of intelligence at scale that is able to distinguish between products that may otherwise have identical metadata and strong identifiers.
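A quick illustration of why lexical heuristics fall short: a simple token-overlap (Jaccard) score gives the two Nike titles above zero similarity even though they describe the same product.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two titles (naive baseline)."""
    ta = set(a.lower().replace(".", "").split())
    tb = set(b.lower().replace(".", "").split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

print(jaccard("Nikes", "Red running shoes by Nike."))  # 0.0 -- no shared tokens
print(jaccard("Nike running shoes", "nike running shoes"))  # 1.0
```

"Nikes" and "Nike" share no exact token, so the score collapses to zero; semantically aware models (e.g. ones built on word embeddings) are needed to bridge that gap.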

The Solution

Our solution consists of two modules:
  1. Strong key-based matching: an entity that consolidates incoming items, groups them based on identifiers like the GTIN and UPC, and creates a unique product ID representing a single product across the Walmart ecosystem
  2. Deep matching: an entity that performs advanced matching using machine learning techniques to identify groups of items based on title, description, price, and other product attributes

1. Strong Key-Based Matching
Strong key-based matching streamlines the creation of universal product identifiers for a product across the Walmart ecosystem by either:
  • Creating a new product ID for a completely new item if the item's GTIN does not exist
  • Assigning an existing product ID to an incoming item if the item's GTIN already exists and is mapped to a product
  • Erroring out in case of ambiguity, where the incoming item's GTIN maps to more than one product
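The three rules above can be sketched as a lookup against a GTIN-to-product index. This is an illustrative in-memory sketch; the actual service is backed by Cassandra, and the function and index names here are hypothetical:

```python
from collections import defaultdict
from uuid import uuid4

# Hypothetical index: GTIN -> set of product IDs it maps to.
gtin_index = defaultdict(set)

def resolve_product_id(gtin: str) -> str:
    """Apply the three strong-key rules: create, assign, or error out."""
    product_ids = gtin_index[gtin]
    if len(product_ids) == 0:
        # Completely new item: mint a new product ID for this GTIN.
        new_id = str(uuid4())
        product_ids.add(new_id)
        return new_id
    if len(product_ids) == 1:
        # Known GTIN mapped to exactly one product: reuse its product ID.
        return next(iter(product_ids))
    # Ambiguity: one GTIN mapped to several products.
    raise ValueError(f"GTIN {gtin} maps to {len(product_ids)} products")
```

The ambiguous branch is exactly the case that gets handed off to deep matching, since the strong key alone cannot decide which product the offer belongs to.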

Strong key-based matching also validates the integrity of the incoming item's data by performing several validation checks. To ensure a smooth transition of the platform internationally, out-of-the-box support for multi-tenancy and internationalization is built into strong key-based matching. This service can also split products that have been erroneously matched, or merge products that are identified by multiple GTINs (a complexity we will not address here).

The service is fully integrated into the Walmart Ecommerce Catalog platform and is built using state-of-the-art architecture and technologies. It features zero downtime, handles massive concurrency and parallelism, scales linearly, and is always available, with 99.99999999% availability.

Strong key-based matching uses the following technologies:
  • Scala/Play framework for building the back-end components that power the user-facing experience
  • Akka framework and “Actor” design pattern for handling and scaling concurrency
  • Distributed database Cassandra for high availability and zero downtime
  • Multi-data center, multi-cloud based deployment strategy on One-Ops for high redundancy
  • Kafka for listening to change events

2. Deep Matching
The strong key module works well for the vast majority of cases; however, it fails when two different products have the same GTIN. Here is an example of an iPad erroneously matched to a knife because one of the sellers used the same UPC (a UPC is a 12-digit GTIN).

Deep matching looks at product title, image, description, price and other attributes to truly determine if the two products being compared are exactly the same or not. We leverage machine learning and statistical algorithms to compare products.

Nevertheless, we face some limitations:
  • The low incidence of mismatches makes it hard to obtain a large amount of labeled data of matched and mismatched pairs of items, since random sampling will return mostly correctly matched groups.
  • Randomly picking two different items to form a mismatched pair does not fully work either, since in many scenarios mismatched items can be almost identical except for one key attribute (e.g., condition, storage size, color).
  • Titles of matching products may not be identical but may contain semantically alike tokens. On the other hand, mismatching products may differ on a single attribute, so their corresponding titles may differ by as little as one character (e.g., 6 pack Coke vs. 12 pack Coke).
  • Incoming attribute data may be missing or noisy.
  • A high price differential may be a strong indicator of a mismatch, but by itself it is rarely conclusive: identical products may have highly varying prices across different sellers, while certain kinds of mismatched products (e.g., different color or sports-team branding) may still have very close or equal prices.

The deep matching module is further divided into sub-modules that leverage different algorithms. Here is a list of these algorithms:
  • Title similarity: Given a pair of product titles, quantifying their degree of similarity using a neural network model. This model uses word-level embeddings pre-trained on all the titles in the entire catalog to ensure the model does not overfit and has sufficient recall.

  • Image similarity: Given a pair of product images, quantifying their degree of similarity. This model leverages deep learning algorithms to return a score of how similar product images are to one another.

  • Attribute extraction/detection: Identifying key attributes such as brand, condition (refurbished/new/used), color, and model number from the available data for each product, and measuring the discrepancies in the attribute values.
  • Price outlier identification: Given a group of offers for a product and their corresponding prices, identifying whether an incoming offer price is an outlier in this price distribution. Given that products in the high price bracket have a large price standard deviation among seller offers, whereas products in the lower price bracket do not, we use Dixon's test and Bartlett's test in the two cases respectively.
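As an illustration of the price-outlier component, here is a minimal sketch of Dixon's Q statistic applied to a group of seller prices. The sample prices and the quoted critical value are illustrative; the production system combines this with Bartlett's test as described above.

```python
def dixon_q(prices, suspect):
    """Dixon's Q statistic for a suspected outlier at either extreme.

    Q = |suspect - nearest neighbour| / (max - min). The suspect is
    flagged when Q exceeds the critical value for the sample size
    (roughly 0.71 for n=5 at the 95% confidence level). Assumes the
    suspect is the minimum or maximum value in the sample.
    """
    s = sorted(prices)
    if s[0] == s[-1]:
        return 0.0
    # Nearest neighbour of the suspect within the sorted sample.
    neighbour = s[-2] if suspect == s[-1] else s[1]
    return abs(suspect - neighbour) / (s[-1] - s[0])

# An incoming $499 offer against a group priced around $20:
q = dixon_q([19.99, 20.49, 20.75, 21.00, 499.00], 499.00)
```

Here Q is close to 1, far above the n=5 critical value, so the incoming offer would be flagged as a price outlier and routed for further mismatch checks.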

Based on the results of the individual components, we render a match/mismatch decision. Decisions are made by leveraging additional machine learning processes. This is a continuous process of optimization and improvement. A sample of mismatches flagged by the algorithm is evaluated daily, and we observe a precision (the fraction of flagged mismatches that are true mismatches) between 88% and 90%.
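The precision figure is computed in the usual way; a toy example with hypothetical daily audit counts:

```python
def precision(true_mismatches_flagged: int, total_flagged: int) -> float:
    """Precision: fraction of flagged mismatches that are genuine."""
    return true_mismatches_flagged / total_flagged

# Hypothetical audit: 445 of 500 daily flags confirmed as true mismatches.
p = precision(445, 500)  # 0.89
```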

Deep matching uses the following technologies:
  • Tensorflow as the core framework for model training
  • Keras as the high-level Python framework for model training
  • Kafka for listening to change events

The Results

  • Matching has delivered the setup of 100+ million products on the Walmart Ecommerce catalog.
  • Our error rate in the strong matching module is below 0.01%; however, the error rate due to sellers using bad GTINs varies between 1% and 5% depending on the product category.
  • More than 35 million products have been analyzed to date by the deep matching module, and 350K have been flagged and removed as invalid seller offers: that is, cases in which the seller used a wrong GTIN to identify the product (as explained above).
  • Despite a substantial increase in product setup and product updates on the Walmart Ecommerce catalog this year, the average time from flagging to un-publishing is within 1 to 3 days.
  • We have reduced customer incidents by about 35% and are working toward solutions that will further reduce these incidents to a negligible percentage.

Any reference in the catalog matching case study to any specific commercial product, process, or service, or the use of any trade, firm or corporation name is for information and convenience purposes only, and does not constitute an endorsement or recommendation by Wal-Mart Stores, Inc.
