
ElasticON 2025 in Paris

ElasticON has returned to the capital with plenty of new features and exciting talks. This article wraps up the event.


Table of contents

  1. Introduction
  2. Community Track
  3. The main event
  4. Conclusion

Introduction

The Adelean team attended ElasticON, held on January 20th and 21st. The main event, which took place in the Salle Wagram, was preceded by the Community Track sessions, organized by David Pilato and the Elastic community in France. In a festive atmosphere, we had the opportunity to explore the new features introduced in the latest versions of Elasticsearch, covering various use cases across the core pillars of the stack: monitoring, security, and search!

Community Track

A wealth of fascinating use cases was shared during the Community Track, ranging from the implementation and management of a data lake at the multinational Stellantis to semantic search applied in the e-commerce context.

During this event, we had the opportunity to present “Billion Vector Baby,” a practical guide to managing a vector database with over a billion vectors. A seemingly impossible task on paper, but one that compression through scalar or binary quantization now puts within everyone’s reach.
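To make the compression argument concrete, here is a minimal sketch of scalar int8 quantization in NumPy (not the talk’s actual implementation, and simpler than what Elasticsearch does internally): float32 vectors are mapped onto 256 levels, giving a 4x memory saving; binary quantization pushes this to 32x at a higher relevance cost.

```python
import numpy as np

# Minimal sketch of scalar (int8) quantization, the kind of compression
# that makes billion-vector indices tractable. Min/max calibration over
# the whole collection is a simplification for illustration.
rng = np.random.default_rng(42)
vectors = rng.standard_normal((1000, 384)).astype(np.float32)

lo, hi = vectors.min(), vectors.max()
scale = (hi - lo) / 255.0                       # map the value range onto 256 levels
quantized = np.round((vectors - lo) / scale - 128).astype(np.int8)

# Dequantize to estimate the information lost to compression
restored = (quantized.astype(np.float32) + 128) * scale + lo
error = np.abs(vectors - restored).max()

print(f"compression: {vectors.nbytes / quantized.nbytes:.0f}x")  # 4x (float32 -> int8)
print(f"max reconstruction error: {error:.4f}")
```

The reconstruction error stays bounded by half a quantization step, which is why relevance degrades only modestly at 4x compression.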

Adelean on stage

In the middle of the evening, Shay Banon took the time to answer some of our questions. Most of them focused on the future of Elasticsearch, particularly version 9, which will be released in 2025.

The main event

The main event was opened by Shay Banon, who provided a comprehensive overview of Elasticsearch, supported by flawless demos from Baha Azarmi. Shay emphasized the open-source positioning of Elasticsearch and reiterated the direction he set about two years ago, during the last Elastic event in Paris, namely the intention to separate storage and processing.

Shay on stage

Separation is also at the heart of the latest semantic search applications. It is now possible to instantiate a model at both the ingestion and search stages, creating two separate pipelines that do not impact each other.
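As a sketch of what this separation can look like in practice (model ids and field names below are illustrative placeholders, not taken from the talks), an inference processor embeds documents at ingest time, while the kNN search request names its own model through `query_vector_builder`, so the two stages never share a pipeline:

```python
# Sketch of decoupled ingest- and search-time inference in Elasticsearch.
# Shown as request bodies; model ids and field names are hypothetical.

# Ingest side: a pipeline runs an embedding model as documents are indexed.
ingest_pipeline = {
    "processors": [
        {
            "inference": {
                "model_id": "my-ingest-embedding-model",    # hypothetical id
                "input_output": {
                    "input_field": "body",
                    "output_field": "body_vector",
                },
            }
        }
    ]
}

# Search side: the kNN query names its own model; no ingest pipeline involved.
knn_query = {
    "knn": {
        "field": "body_vector",
        "k": 10,
        "num_candidates": 100,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "my-search-embedding-model",    # hypothetical id
                "model_text": "how do I reset my password?",
            }
        },
    }
}
```

Because each stage declares its model independently, swapping or upgrading the search-time model never forces a reindex of already-embedded documents.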

Another central point of the presentation was the massive support for using large language models—not only for RAG (retrieval-augmented generation) but also during the ingestion phase, with automatically generated pipelines designed to better handle and integrate our logs.

Speaking of logs, OpenTelemetry support has become a strategic priority for Elasticsearch.

Numerous exciting updates were unveiled between the talks.

Uri Cohen, Product Manager at Elastic, discussed optimizations related to Elasticsearch’s growing role as a vector database and announced a new quantization method that promises the same level of compression as BBQ, but with lower loss in the relevance of search results.

Uri Cohen on stage

The provisional name for this quantization method is OSQ, or Optimized Scalar Quantization.

From SIMD instructions to the Java Panama Vector API, Elasticsearch has come a long way, and the future looks promising for semantic search.

The future of ES|QL is also looking bright, as joins will be introduced in upcoming versions of Elasticsearch—joins that won’t require the ENRICH command.

Another exciting feature, set to arrive on-premises in 2025 (though already available in cloud versions, both serverless and non-serverless), is AutoOps. This feature automates and accelerates cluster management operations through an integrated RAG system. In short, the LLM (large language model) understands the cluster’s state and can recommend improvements or help resolve specific issues.

In the second part of the day, the focus shifted to two other key strengths of Elasticsearch: “Security” and “Search.” Ben Diawara explored the current challenges in cybersecurity and how artificial intelligence (AI) can enhance security operations, while also discussing how cyberattacks are becoming increasingly sophisticated, often leveraging AI itself.

AI can amplify social engineering, exploit development, and vulnerability scanning. The presentation covered the evolution of Security Information and Event Management (SIEM) systems and how AI can improve threat detection, contextual investigations, and orchestrated responses.

Ben Diawara on stage

As demonstrated during the session, Elasticsearch, by tracking logs collected through various deployable agents, can reconstruct an attack and explain its root causes, making it easier to apply timely measures in the event of a cyber threat.

The conference concluded with a focus on “Search,” particularly on the use of advanced semantic search within a RAG context, because LLMs are nothing without effective retrieval.

We got a hands-on look at the new field type available for semantic search: semantic_text, which simplifies both the ingestion and search phases. No more pipelines, and no more need to specify which model to use at search time.
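A minimal sketch of what that simplification looks like (index and field names are illustrative): the mapping declares the field type, and the query names only the field and the text, with the inference configuration carried by the field itself.

```python
# Sketch of the semantic_text workflow: no ingest pipeline, no search-time
# model id. Shown as request bodies; names are illustrative.
mapping = {
    "mappings": {
        "properties": {
            # Uses the cluster's default inference endpoint unless one is named.
            "content": {"type": "semantic_text"}
        }
    }
}

# Querying is just as terse: the semantic query names only field and text.
query = {
    "query": {
        "semantic": {
            "field": "content",
            "query": "vector databases at billion scale",
        }
    }
}
```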

Kaouther Karoui on stage

The presentation focused on how to improve the retrieval phase within a RAG system. One of the techniques highlighted was semantic chunking, which leverages sentence context to avoid crudely truncating texts that exceed the model’s token limit.
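A minimal sketch of the idea, with tokens approximated by whitespace-separated words (a real system would use the embedding model’s own tokenizer): whole sentences are packed into chunks under a token budget instead of truncating mid-sentence.

```python
import re

def chunk_sentences(text: str, max_tokens: int = 50) -> list[str]:
    # Sentence-aware chunking sketch: split on sentence boundaries, then
    # greedily pack whole sentences into chunks under the token budget.
    # A single oversized sentence is kept whole rather than cut.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        n = len(sentence.split())            # crude whitespace token count
        if current and count + n > max_tokens:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sentence)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = ("Elasticsearch stores vectors. Quantization compresses them. "
       "Semantic chunking keeps sentences intact when a text exceeds "
       "the model's token limit.")
for chunk in chunk_sentences(doc, max_tokens=8):
    print(chunk)
```

The payoff is that each chunk remains a coherent unit of meaning, so its embedding reflects a complete thought rather than a truncated one.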

Another interesting approach was query rewriting using Hyde: starting with a simple user query, a hypothetical document is generated, vectorized, and then compared with documents in the vector database to retrieve the most relevant one.
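A sketch of that HyDE flow with the LLM and the embedding model stubbed out (all function names and the toy corpus are illustrative; in practice both stubs would be real model calls):

```python
import math

def fake_llm(query: str) -> str:
    # Stand-in for an LLM prompted to answer the query as if it were a document.
    return f"A document answering: {query}"

def fake_embed(text: str) -> list[float]:
    # Stand-in embedding (bag of character codes), just to make the sketch run.
    vec = [0.0] * 8
    for i, ch in enumerate(text):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

corpus = {
    "doc1": "Resetting a password in the admin console.",
    "doc2": "Quarterly revenue figures for 2024.",
}

query = "how to reset a password"
hypothetical = fake_llm(query)                # 1. generate a hypothetical document
q_vec = fake_embed(hypothetical)              # 2. embed it instead of the raw query
scores = {doc_id: cosine(q_vec, fake_embed(text))   # 3. compare against the corpus
          for doc_id, text in corpus.items()}
best = max(scores, key=scores.get)
```

The intuition is that a hypothetical answer lives closer to real answer documents in embedding space than the short question does.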

Conclusion

In conclusion, this ElasticON in Paris gave us a preview of some of the features that will be introduced in Elasticsearch soon, confirming the team’s efforts to deliver a more complete and functional product.

Like every other conference we’ve attended over the past year, ElasticON was also a moment of sharing and team building for Adelean, as well as an opportunity to connect with others working within our domain.