
NLP in OpenSearch

A practical guide to importing and using NLP models in OpenSearch for text analysis and inference in your search and analytics workflows

Introduction

In our previous article, we delved into the implementation of NLP in Elasticsearch. Now, let’s turn to the world of NLP in OpenSearch.

As a fork of Elasticsearch, OpenSearch builds on the same Lucene foundation while adding its own set of NLP capabilities. It is therefore worth understanding the differences between the two platforms and the additional functionality OpenSearch offers.

In this article, we will take a closer look at the process of uploading and importing NLP models into OpenSearch, which lets you leverage pre-trained models for advanced text analysis and inference.

Uploading and Importing NLP Models

To utilize NLP models in OpenSearch, we need to upload and import them into the system. OpenSearch supports models in the TorchScript and ONNX formats. We have two scenarios to consider: uploading a model from Hugging Face or uploading a custom model.

  1. Uploading a Model from Hugging Face: If the desired NLP model is available in the Hugging Face library and is compatible with OpenSearch, we can directly upload it using the following API request:

    
     POST /_plugins/_ml/models/_upload
     {
       "name": "huggingface/sentence-transformers/all-MiniLM-L12-v2",
       "version": "1.0.1",
       "model_format": "TORCH_SCRIPT"
     }
    
    

    Unfortunately, only a subset of the models available on Hugging Face is compatible with this method. You can find the list of compatible models here: supported pretrained models

  2. Uploading a Custom Model: If you have a custom NLP model that is not available in the Hugging Face library, you can prepare the model outside the OpenSearch cluster and follow these steps to upload and import it:

  • Export the Model: Export your custom NLP model to the TorchScript or ONNX format, as these are the formats OpenSearch currently supports.
  • Compress the Model: To ensure a successful upload, compress the model file into a zip archive.
  • Use the Upload API: Make a POST request to the following endpoint to upload your custom model:

POST /_plugins/_ml/models/_upload
{
  "name": "all-MiniLM-L6-v2",
  "version": "1.0.0",
  "description": "test model",
  "model_format": "TORCH_SCRIPT",
  "model_config": {
    "model_type": "bert",
    "embedding_dimension": 384,
    "framework_type": "sentence_transformers"
  },
  "url": "https://github.com/opensearch-project/ml-commons/raw/2.x/ml-algorithms/src/test/resources/org/opensearch/ml/engine/algorithms/text_embedding/all-MiniLM-L6-v2_torchscript_sentence-transformer.zip?raw=true"
}
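
Whether you upload a pretrained or a custom model, the request is asynchronous and immediately returns a task ID that can be used to track its progress. The response typically looks like the following (a sketch; the exact fields may vary between versions):

    {
      "task_id": "ew8I44MBhyWuIwnfvDIH",
      "status": "CREATED"
    }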

Once the NLP model is uploaded, the next step is to load it into memory for deployment on ML nodes. To accomplish this, you need to obtain the model_id, which can be retrieved using the _ml/tasks endpoint.
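For example, using the task ID returned by the upload request (the IDs here are illustrative):

    GET /_plugins/_ml/tasks/ew8I44MBhyWuIwnfvDIH

Once the task state is COMPLETED, the response contains the model_id of the uploaded model (truncated sketch):

    {
      "model_id": "WWQI44MBbzI2oUKAvNUt",
      "task_type": "UPLOAD_MODEL",
      "state": "COMPLETED"
    }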

To load the model, you can use the following API call:


POST /_plugins/_ml/models/<model_id>/_load

By default, models are loaded onto ML nodes first, but they may also run on data nodes. If you want to ensure that models only run on ML nodes, set the configuration plugins.ml_commons.only_run_on_ml_node to true.
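
This setting can be applied dynamically through the cluster settings API:

    PUT /_cluster/settings
    {
      "persistent": {
        "plugins.ml_commons.only_run_on_ml_node": true
      }
    }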

Using NLP Models for Inferences

With the NLP model uploaded and loaded into OpenSearch, you can use it to perform inference on text data. The ML Commons plugin provides the predict API, which allows you to make predictions using the imported NLP models.

Once you obtain the model ID of the chosen model, you can make a POST request to the _plugins/_ml/_predict/<algorithm_name>/<model_id> endpoint, providing the necessary input data for inference. The exact payload and input format depend on the specific NLP model and task you’re working with. Refer to the model’s documentation or examples for guidance on the input format.

For example:


POST /_plugins/_ml/_predict/text_embedding/WWQI44MBbzI2oUKAvNUt
{
  "text_docs": ["Adelean uses OpenSearch"],
  "return_number": true,
  "target_response": ["sentence_embedding"]
}
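
For a text embedding model, the response contains one embedding per input document, along the following lines (a truncated sketch; the exact shape depends on the model):

    {
      "inference_results": [
        {
          "output": [
            {
              "name": "sentence_embedding",
              "data_type": "FLOAT32",
              "shape": [384],
              "data": [-0.023315691, 0.05738113, ...]
            }
          ]
        }
      ]
    }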

If the model is working as expected, it can be seamlessly integrated into the search workflow using the Neural Search plugin.

This can be done by following these steps:

  1. Create a Neural Search pipeline using the endpoint PUT _ingest/pipeline/<pipeline_name>. Specify the pipeline name in the path and provide the required request fields, such as the description and the ID of a model that has already been uploaded to OpenSearch. An example request is as follows:

        
     PUT _ingest/pipeline/nlp-pipeline
     {
       "description": "An example neural search pipeline",
       "processors": [
         {
           "text_embedding": {
             "model_id": "bxoDJ7IHGM14UqatWc_2j",
             "field_map": {
               "passage_text": "passage_embedding"
             }
           }
         }
       ]
     }
    
  2. Create an index for ingestion whose mapping is aligned with the specified pipeline, and set the index setting index.knn to true to enable k-NN vector fields. An example request to create an index is as follows (a concrete, filled-in version is shown after this list):

    
     PUT /my-nlp-index-1
     {
       "settings": {
         "index.knn": true,
         "default_pipeline": "<pipeline_name>"
       },
       "mappings": {
         "properties": {
           "passage_embedding": {
             "type": "knn_vector",
             "dimension": int,
             "method": {
               "name": "string",
               "space_type": "string",
               "engine": "string",
               "parameters": json_object
             }
           },
           "passage_text": {
             "type": "text"
           }
         }
       }
     }
       
    
  3. Ingest documents into Neural Search by sending a simple POST request to the corresponding index. For example:

       
    POST /my-nlp-index-1/_doc
    {
      "passage_text": "Hello Adelean"
    }
    
    
  4. Once indexing is done, you can search the documents using the model to convert the text query into a k-NN vector query. It is also possible to combine vector search with keyword search. Here is an example of how this is done in OpenSearch:

       
    GET /my-nlp-index-1/_search
    {
      "query": {
        "bool": {
          "should": [
            {
              "script_score": {
                "query": {
                  "neural": {
                    "passage_embedding": {
                      "query_text": "Hello Adelean",
                      "model_id": "xzy76xswsd",
                      "k": 100
                    }
                  }
                },
                "script": {
                  "source": "_score * 1.5"
                }
              }
            },
            {
              "script_score": {
                "query": {
                  "match": { "passage_text": "Hello Adelean" }
                },
                "script": {
                  "source": "_score * 1.7"
                }
              }
            }
          ]
        }
      }
    }
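
As a concrete example of the index mapping template from step 2, here is how the placeholders might be filled in for the all-MiniLM-L6-v2 model used earlier, which produces 384-dimension embeddings. The method values below (HNSW with the Lucene engine and l2 distance) are illustrative choices, not requirements:

    PUT /my-nlp-index-1
    {
      "settings": {
        "index.knn": true,
        "default_pipeline": "nlp-pipeline"
      },
      "mappings": {
        "properties": {
          "passage_embedding": {
            "type": "knn_vector",
            "dimension": 384,
            "method": {
              "name": "hnsw",
              "space_type": "l2",
              "engine": "lucene"
            }
          },
          "passage_text": {
            "type": "text"
          }
        }
      }
    }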
       
    

OpenSearch Dashboard for model management

OpenSearch Dashboards and its Elasticsearch counterpart, Kibana, share many similarities when it comes to NLP model management.

So, if you are accustomed to Kibana, you won’t have any problems finding your way around.

Keep in mind that the model management functionality is disabled by default in OpenSearch (at least up to version 2.6). Therefore, it is necessary to edit the opensearch_dashboards.yml file and set ml_commons_dashboards.enabled: true in order to enable it.
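
The relevant line in opensearch_dashboards.yml is:

    ml_commons_dashboards.enabled: true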

Overview of the model management dashboard

From the dashboard we can also get the Model ID for each model we’ve imported.
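
If you prefer the API, the registered models and their IDs can also be listed with the ML Commons model search endpoint, for example:

    POST /_plugins/_ml/models/_search
    {
      "query": {
        "match_all": {}
      },
      "size": 100
    }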

Conclusion

In this article, we have offered in-depth insights into the various aspects of importing and uploading models, creating Neural Search pipelines, and utilizing vector search with OpenSearch.

As we explored these topics, it became evident that OpenSearch presents notable differences compared to Elasticsearch. Each platform has its strengths, with Elasticsearch excelling in certain areas while OpenSearch introduces compelling features such as the ability to import models directly from the dashboard.

At Adelean, we eagerly look forward to new updates and functionality that will be introduced in the future.
