diff -pruN 9.1.0-3/debian/changelog 9.1.1-1/debian/changelog
--- 9.1.0-3/debian/changelog	2025-09-27 19:00:41.000000000 +0000
+++ 9.1.1-1/debian/changelog	2025-10-01 11:15:07.000000000 +0000
@@ -1,3 +1,9 @@
+python-elasticsearch (9.1.1-1) unstable; urgency=medium
+
+  * New upstream version 9.1.1
+
+ -- Karsten Schöke <karsten.schoeke@geobasis-bb.de>  Wed, 01 Oct 2025 13:15:07 +0200
+
 python-elasticsearch (9.1.0-3) unstable; urgency=medium
 
   * Team Upload.
diff -pruN 9.1.0-3/docs/reference/dsl_how_to_guides.md 9.1.1-1/docs/reference/dsl_how_to_guides.md
--- 9.1.0-3/docs/reference/dsl_how_to_guides.md	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/docs/reference/dsl_how_to_guides.md	2025-09-12 13:23:45.000000000 +0000
@@ -1425,6 +1425,127 @@ print(response.took)
 If you want to inspect the contents of the `response` objects, just use its `to_dict` method to get access to the raw data for pretty printing.
 
 
+## ES|QL Queries
+
+When working with `Document` classes, you can use the ES|QL query language to retrieve documents. For this, you can use the `esql_from()` and `esql_execute()` methods available in all sub-classes of `Document`.
+
+Consider the following `Employee` document definition:
+
+```python
+from elasticsearch.dsl import Document, InnerDoc, M
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+    zip_code: M[str]
+
+class Employee(Document):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = 'employees'
+```
+
+The `esql_from()` method creates a base ES|QL query for the index associated with the document class. The following example creates a base query for the `Employee` class:
+
+```python
+query = Employee.esql_from()
+```
+
+This query includes a `FROM` command with the index name, and a `KEEP` command that retrieves all the document attributes.
+
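+You can print the query object to see the assembled ES|QL commands:
+
+```python
+print(query)
+```
+
+Assuming the `Employee` document above, the output looks roughly like this (the exact `KEEP` field list follows the document definition):
+
+```
+FROM employees
+| KEEP emp_no, first_name, last_name, height, still_hired, address.address, address.city, address.zip_code
+```
+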
+To execute this query and receive the results, you can pass the query to the `esql_execute()` method:
+
+```python
+for emp in Employee.esql_execute(query):
+    print(f"{emp.name} from {emp.address.city} is {emp.height:.2f}m tall")
+```
+
+In this example, the `esql_execute()` class method runs the query and returns all the documents in the index, up to the implicit limit of 1000 results that ES|QL applies to queries without an explicit `LIMIT`. Here is a possible output from this example:
+
+```
+Kevin Macias from North Robert is 1.60m tall
+Drew Harris from Boltonshire is 1.68m tall
+Julie Williams from Maddoxshire is 1.99m tall
+Christopher Jones from Stevenbury is 1.98m tall
+Anthony Lopez from Port Sarahtown is 2.42m tall
+Tricia Stone from North Sueshire is 2.39m tall
+Katherine Ramirez from Kimberlyton is 1.83m tall
+...
+```
+
+To search for specific documents, you can extend the base query with additional ES|QL commands that narrow the search criteria. The next example returns only employees that are taller than 2 meters, sorted by their last name, and limits the results to 4 people:
+
+```python
+query = (
+    Employee.esql_from()
+    .where(Employee.height > 2)
+    .sort(Employee.last_name)
+    .limit(4)
+)
+```
+
+When running this query with the same for-loop shown above, possible results would be:
+
+```
+Michael Adkins from North Stacey is 2.48m tall
+Kimberly Allen from Toddside is 2.24m tall
+Crystal Austin from East Michaelchester is 2.30m tall
+Rebecca Berger from Lake Adrianside is 2.40m tall
+```
+
+### Additional fields
+
+ES|QL provides a few ways to add new fields to a query, for example through the `EVAL` command. The following example shows a query that adds an evaluated field:
+
+```python
+from elasticsearch.esql import E, functions
+
+query = (
+    Employee.esql_from()
+    .eval(height_cm=functions.round(Employee.height * 100))
+    .where(E("height_cm") >= 200)
+    .sort(Employee.last_name)
+    .limit(10)
+)
+```
+
+In this example, we add the height in centimeters to the query, calculated from the `height` document field, which is in meters. The calculated `height_cm` field can be used in other query clauses; in this example it is referenced in `where()`. Note how the new field is given as `E("height_cm")` in this clause. The `E()` wrapper tells the query builder that the argument is an ES|QL field name and not a string literal. This is done automatically for document fields that are given as class attributes, such as `Employee.height` in the `eval()`. The `E()` wrapper is only needed for fields that are not part of the document.
+
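+As a quick reference, the two clauses below each reference a field; only the second needs the wrapper, because `height_cm` is not a document attribute:
+
+```python
+query.where(Employee.height > 2)    # document field, no wrapper needed
+query.where(E("height_cm") >= 200)  # evaluated field, wrapped with E()
+```
+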
+By default, the `esql_execute()` method returns only document instances. To also receive fields that are not part of the document, pass the `return_additional=True` argument. The results are then returned as tuples, with the document as the first element and a dictionary of additional fields as the second:
+
+```python
+for emp, additional in Employee.esql_execute(query, return_additional=True):
+    print(emp.first_name, emp.last_name, additional)
+```
+
+Example output from the query given above:
+
+```
+Michael Adkins {'height_cm': 248.0}
+Kimberly Allen {'height_cm': 224.0}
+Crystal Austin {'height_cm': 230.0}
+Rebecca Berger {'height_cm': 240.0}
+Katherine Blake {'height_cm': 214.0}
+Edward Butler {'height_cm': 246.0}
+Steven Carlson {'height_cm': 242.0}
+Mark Carter {'height_cm': 240.0}
+Joseph Castillo {'height_cm': 229.0}
+Alexander Cohen {'height_cm': 245.0}
+```
+
+### Missing fields
+
+The base query returned by the `esql_from()` method includes a `KEEP` command with the complete list of fields that are part of the document. If subsequent clauses added to the query remove any of these fields, the `esql_execute()` method will raise an exception, because it will not be able to construct complete document instances to return as results.
+
+To prevent errors, it is recommended not to use the `keep()` and `drop()` clauses when working with `Document` instances.
+
+If a query has missing fields, it can be forced to execute without errors by passing the `ignore_missing_fields=True` argument to `esql_execute()`. When this option is used, returned documents will have any missing fields set to `None`.
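+
+Continuing the example, here is a minimal sketch of this option (assuming `drop()` accepts a document attribute, as `where()` and `sort()` do):
+
+```python
+query = Employee.esql_from().drop(Employee.still_hired)
+for emp in Employee.esql_execute(query, ignore_missing_fields=True):
+    print(emp.emp_no, emp.still_hired)  # still_hired is None in every result
+```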
 
 ## Using asyncio with Elasticsearch Python DSL [asyncio]
 
diff -pruN 9.1.0-3/docs/reference/dsl_tutorials.md 9.1.1-1/docs/reference/dsl_tutorials.md
--- 9.1.0-3/docs/reference/dsl_tutorials.md	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/docs/reference/dsl_tutorials.md	2025-09-12 13:23:45.000000000 +0000
@@ -83,7 +83,7 @@ Let’s have a simple Python class repre
 
 ```python
 from datetime import datetime
-from elasticsearch.dsl import Document, Date, Integer, Keyword, Text, connections
+from elasticsearch.dsl import Document, Date, Integer, Keyword, Text, connections, mapped_field
 
 # Define a default Elasticsearch client
 connections.create_connection(hosts="https://localhost:9200")
@@ -91,7 +91,7 @@ connections.create_connection(hosts="htt
 class Article(Document):
     title: str = mapped_field(Text(analyzer='snowball', fields={'raw': Keyword()}))
     body: str = mapped_field(Text(analyzer='snowball'))
-    tags: str = mapped_field(Keyword())
+    tags: list[str] = mapped_field(Keyword())
     published_from: datetime
     lines: int
 
@@ -216,6 +216,20 @@ response = ubq.execute()
 As you can see, the `Update By Query` object provides many of the savings offered by the `Search` object, and additionally allows one to update the results of the search based on a script assigned in the same manner.
 
 
+## ES|QL Queries
+
+The DSL module features an integration with the ES|QL query builder, consisting of two methods available in all `Document` sub-classes: `esql_from()` and `esql_execute()`. Using the `Article` document from above, we can search for up to ten articles that include `"world"` in their titles with the following ES|QL query:
+
+```python
+from elasticsearch.esql import functions
+
+query = Article.esql_from().where(functions.match(Article.title, 'world')).limit(10)
+for a in Article.esql_execute(query):
+    print(a.title)
+```
+
+Review the [ES|QL Query Builder section](esql-query-builder.md) to learn more about building ES|QL queries in Python.
+
 ## Migration from the standard client [_migration_from_the_standard_client]
 
 You don’t have to port your entire application to get the benefits of the DSL module, you can start gradually by creating a `Search` object from your existing `dict`, modifying it using the API and serializing it back to a `dict`:
diff -pruN 9.1.0-3/docs/reference/esql-query-builder.md 9.1.1-1/docs/reference/esql-query-builder.md
--- 9.1.0-3/docs/reference/esql-query-builder.md	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/docs/reference/esql-query-builder.md	2025-09-12 13:23:45.000000000 +0000
@@ -20,7 +20,7 @@ The ES|QL Query Builder allows you to co
 You can then see the assembled ES|QL query by printing the resulting query object:
 
 ```python
->>> query
+>>> print(query)
 FROM employees
 | SORT emp_no
 | KEEP first_name, last_name, height
@@ -28,12 +28,12 @@ FROM employees
 | LIMIT 3
 ```
 
-To execute this query, you can cast it to a string and pass the string to the `client.esql.query()` endpoint:
+To execute this query, you can pass it to the `client.esql.query()` endpoint:
 
 ```python
 >>> from elasticsearch import Elasticsearch
 >>> client = Elasticsearch(hosts=[os.environ['ELASTICSEARCH_URL']])
->>> response = client.esql.query(query=str(query))
+>>> response = client.esql.query(query=query)
 ```
 
 The response body contains a `columns` attribute with the list of columns included in the results, and a `values` attribute with the list of results for the query, each given as a list of column values. Here is a possible response body returned by the example query given above:
@@ -203,6 +203,26 @@ query = (
 )
 ```
 
+### Preventing injection attacks
+
+ES|QL, like most query languages, is vulnerable to [code injection attacks](https://en.wikipedia.org/wiki/Code_injection) if untrusted data provided by users is added to a query. To eliminate this risk, ES|QL allows untrusted data to be given separately from the query as parameters.
+
+Continuing with the example above, let's assume that the application needs a `find_employee_by_name()` function that searches for the name given as an argument. If this argument is received by the application from users, then it is considered untrusted and should not be added to the query directly. Here is how to code the function in a secure manner:
+
+```python
+def find_employee_by_name(name):
+    query = (
+        ESQL.from_("employees")
+        .keep("first_name", "last_name", "height")
+        .where(E("first_name") == E("?"))
+    )
+    return client.esql.query(query=query, params=[name])
+```
+
+Here, the part of the query where the untrusted data needs to be inserted is replaced with a parameter, which ES|QL denotes with a question mark. When using Python expressions, the parameter must be given as `E("?")` so that it is treated as an expression and not as a literal string.
+
+The values given in the `params` argument to the query endpoint are assigned, in order, to the parameters defined in the query.
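+
+Since parameters are positional, a query with several placeholders takes the same number of values. As a hypothetical extension of the example above, each `where()` call adds another `WHERE` command with its own parameter:
+
+```python
+def find_employee_by_full_name(first_name, last_name):
+    query = (
+        ESQL.from_("employees")
+        .keep("first_name", "last_name", "height")
+        .where(E("first_name") == E("?"))
+        .where(E("last_name") == E("?"))
+    )
+    return client.esql.query(query=query, params=[first_name, last_name])
+```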
+
 ## Using ES|QL functions
 
 The ES|QL language includes a rich set of functions that can be used in expressions and conditionals. These can be included in expressions given as strings, as shown in the example below:
@@ -235,6 +255,6 @@ query = (
 )
 ```
 
-Note that arguments passed to functions are assumed to be literals. When passing field names, it is necessary to wrap them with the `E()` helper function so that they are interpreted correctly.
+Note that arguments passed to functions are assumed to be literals. When passing field names, parameters, or other ES|QL expressions, it is necessary to wrap them with the `E()` helper function so that they are interpreted correctly.
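+
+For example, to reference a field inside a function call without a document class available, the name can be wrapped with `E()`; a bare string in the same position would be treated as a literal (a minimal sketch, assuming arithmetic on `E()` expressions mirrors the `Employee.height * 100` example above):
+
+```python
+from elasticsearch.esql import E, functions
+
+# "height" is interpreted as a field reference, not the literal string "height"
+expr = functions.round(E("height") * 100)
+```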
 
 You can find the complete list of available functions in the Python client's [ES|QL API reference documentation](https://elasticsearch-py.readthedocs.io/en/stable/esql.html#module-elasticsearch.esql.functions).
diff -pruN 9.1.0-3/docs/release-notes/breaking-changes.md 9.1.1-1/docs/release-notes/breaking-changes.md
--- 9.1.0-3/docs/release-notes/breaking-changes.md	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/docs/release-notes/breaking-changes.md	2025-09-12 13:23:45.000000000 +0000
@@ -28,7 +28,7 @@ For more information, check [PR #2840](h
  * `host_info_callback` is now `sniffed_node_callback`
  * `sniffer_timeout` is now `min_delay_between_sniffing`
  * `sniff_on_connection_fail` is now `sniff_on_node_failure`
- * `maxsize` is now `connection_per_node`
+ * `maxsize` is now `connections_per_node`
 ::::
 
 ::::{dropdown} Remove deprecated url_prefix and use_ssl host keys
@@ -50,4 +50,4 @@ Elasticsearch 9 removed the kNN search a
 **Action**<br>
  * The kNN search API has been replaced by the `knn` option in the search API since Elasticsearch 8.4.
  * The Unfreeze index API was deprecated in Elasticsearch 7.14 and has been removed in Elasticsearch 9.
- ::::
\ No newline at end of file
+ ::::
diff -pruN 9.1.0-3/docs/release-notes/index.md 9.1.1-1/docs/release-notes/index.md
--- 9.1.0-3/docs/release-notes/index.md	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/docs/release-notes/index.md	2025-09-12 13:23:45.000000000 +0000
@@ -18,6 +18,25 @@ To check for security updates, go to [Se
 % *
 
 % ### Fixes [elasticsearch-python-client-next-fixes]
+## 9.1.1 (2025-09-11)
+
+* ES|QL query builder integration with the DSL module ([#3058](https://github.com/elastic/elasticsearch-py/pull/3058))
+* ES|QL query builder robustness fixes ([#3017](https://github.com/elastic/elasticsearch-py/pull/3017))
+* Fix ES|QL `multi_match()` signature ([#3052](https://github.com/elastic/elasticsearch-py/pull/3052))
+
+API
+* Add support for ES|QL query builder objects to ES|QL Query and Async Query APIs
+* Add Transform Set Upgrade Mode API
+* Fix type of `fields` parameter of Term Vectors API to array of strings
+* Fix type of `params` parameter of SQL Query API to array
+
+DSL
+* Preserve the `skip_empty` setting in `to_dict()` recursive serializations ([#3041](https://github.com/elastic/elasticsearch-py/pull/3041))
+* Add `separator_group` and `separators` attributes to `ChunkingSettings` type
+* Add `primary` attribute to `ShardFailure` type
+* Fix type of `key` attribute of `ArrayPercentilesItem` to float
+
+
 ## 9.1.0 (2025-07-30)
 
 Enhancements
diff -pruN 9.1.0-3/elasticsearch/_async/client/__init__.py 9.1.1-1/elasticsearch/_async/client/__init__.py
--- 9.1.0-3/elasticsearch/_async/client/__init__.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/__init__.py	2025-09-12 13:23:45.000000000 +0000
@@ -608,6 +608,7 @@ class AsyncElasticsearch(BaseClient):
           <li>JavaScript: Check out <code>client.helpers.*</code></li>
           <li>.NET: Check out <code>BulkAllObservable</code></li>
           <li>PHP: Check out bulk indexing.</li>
+          <li>Ruby: Check out <code>Elasticsearch::Helpers::BulkHelper</code></li>
           </ul>
           <p><strong>Submitting bulk requests with cURL</strong></p>
           <p>If you're providing text file input to <code>curl</code>, you must use the <code>--data-binary</code> flag instead of plain <code>-d</code>.
@@ -1326,7 +1327,7 @@ class AsyncElasticsearch(BaseClient):
         )
 
     @_rewrite_parameters(
-        body_fields=("max_docs", "query", "slice"),
+        body_fields=("max_docs", "query", "slice", "sort"),
         parameter_aliases={"from": "from_"},
     )
     async def delete_by_query(
@@ -1370,7 +1371,12 @@ class AsyncElasticsearch(BaseClient):
         ] = None,
         slice: t.Optional[t.Mapping[str, t.Any]] = None,
         slices: t.Optional[t.Union[int, t.Union[str, t.Literal["auto"]]]] = None,
-        sort: t.Optional[t.Sequence[str]] = None,
+        sort: t.Optional[
+            t.Union[
+                t.Sequence[t.Union[str, t.Mapping[str, t.Any]]],
+                t.Union[str, t.Mapping[str, t.Any]],
+            ]
+        ] = None,
         stats: t.Optional[t.Sequence[str]] = None,
         terminate_after: t.Optional[int] = None,
         timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
@@ -1502,7 +1508,7 @@ class AsyncElasticsearch(BaseClient):
         :param slice: Slice the request manually using the provided slice ID and total
             number of slices.
         :param slices: The number of slices this task should be divided into.
-        :param sort: A comma-separated list of `<field>:<direction>` pairs.
+        :param sort: A sort object that specifies the order of deleted documents.
         :param stats: The specific `tag` of the request for logging and statistical purposes.
         :param terminate_after: The maximum number of documents to collect for each shard.
             If a query reaches this limit, Elasticsearch terminates the query early.
@@ -1592,8 +1598,6 @@ class AsyncElasticsearch(BaseClient):
             __query["search_type"] = search_type
         if slices is not None:
             __query["slices"] = slices
-        if sort is not None:
-            __query["sort"] = sort
         if stats is not None:
             __query["stats"] = stats
         if terminate_after is not None:
@@ -1613,6 +1617,8 @@ class AsyncElasticsearch(BaseClient):
                 __body["query"] = query
             if slice is not None:
                 __body["slice"] = slice
+            if sort is not None:
+                __body["sort"] = sort
         __headers = {"accept": "application/json", "content-type": "application/json"}
         return await self.perform_request(  # type: ignore[return-value]
             "POST",
@@ -3870,6 +3876,13 @@ class AsyncElasticsearch(BaseClient):
           In this case, the response includes a count of the version conflicts that were encountered.
           Note that the handling of other error types is unaffected by the <code>conflicts</code> property.
           Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than <code>max_docs</code> until it has successfully indexed <code>max_docs</code> documents into the target or it has gone through every document in the source query.</p>
+          <p>It's recommended to reindex on indices with a green status. Reindexing can fail when a node shuts down or crashes.</p>
+          <ul>
+          <li>When requested with <code>wait_for_completion=true</code> (default), the request fails if the node shuts down.</li>
+          <li>When requested with <code>wait_for_completion=false</code>, a task id is returned, for use with the task management APIs. The task may disappear or fail if the node shuts down.
+          When retrying a failed reindex operation, it might be necessary to set <code>conflicts=proceed</code> or to first delete the partial destination index.
+          Additionally, dry runs, checking disk space, and fetching index recovery information can help address the root cause.</li>
+          </ul>
           <p>Refer to the linked documentation for examples of how to reindex documents.</p>
 
 
@@ -5649,7 +5662,7 @@ class AsyncElasticsearch(BaseClient):
         doc: t.Optional[t.Mapping[str, t.Any]] = None,
         error_trace: t.Optional[bool] = None,
         field_statistics: t.Optional[bool] = None,
-        fields: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        fields: t.Optional[t.Sequence[str]] = None,
         filter: t.Optional[t.Mapping[str, t.Any]] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
diff -pruN 9.1.0-3/elasticsearch/_async/client/cat.py 9.1.1-1/elasticsearch/_async/client/cat.py
--- 9.1.0-3/elasticsearch/_async/client/cat.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/cat.py	2025-09-12 13:23:45.000000000 +0000
@@ -47,7 +47,34 @@ class CatClient(NamespacedClient):
         ] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "alias",
+                            "filter",
+                            "index",
+                            "is_write_index",
+                            "routing.index",
+                            "routing.search",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "alias",
+                        "filter",
+                        "index",
+                        "is_write_index",
+                        "routing.index",
+                        "routing.search",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         master_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
@@ -74,7 +101,8 @@ class CatClient(NamespacedClient):
             values, such as `open,hidden`.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param master_timeout: The period to wait for a connection to the master node.
@@ -137,7 +165,48 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "disk.avail",
+                            "disk.indices",
+                            "disk.indices.forecast",
+                            "disk.percent",
+                            "disk.total",
+                            "disk.used",
+                            "host",
+                            "ip",
+                            "node",
+                            "node.role",
+                            "shards",
+                            "shards.undesired",
+                            "write_load.forecast",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "disk.avail",
+                        "disk.indices",
+                        "disk.indices.forecast",
+                        "disk.percent",
+                        "disk.total",
+                        "disk.used",
+                        "host",
+                        "ip",
+                        "node",
+                        "node.role",
+                        "shards",
+                        "shards.undesired",
+                        "write_load.forecast",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -161,7 +230,8 @@ class CatClient(NamespacedClient):
         :param bytes: The unit used to display byte values.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -224,7 +294,36 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "alias_count",
+                            "included_in",
+                            "mapping_count",
+                            "metadata_count",
+                            "name",
+                            "settings_count",
+                            "version",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "alias_count",
+                        "included_in",
+                        "mapping_count",
+                        "metadata_count",
+                        "name",
+                        "settings_count",
+                        "version",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -249,7 +348,8 @@ class CatClient(NamespacedClient):
             If it is omitted, all component templates are returned.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -310,7 +410,12 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[t.Union[str, t.Literal["count", "epoch", "timestamp"]]],
+                t.Union[str, t.Literal["count", "epoch", "timestamp"]],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
@@ -334,7 +439,8 @@ class CatClient(NamespacedClient):
             and indices, omit this parameter or use `*` or `_all`.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param s: List of columns that determine how the table should be sorted. Sorting
@@ -389,7 +495,14 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[str, t.Literal["field", "host", "id", "ip", "node", "size"]]
+                ],
+                t.Union[str, t.Literal["field", "host", "id", "ip", "node", "size"]],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
@@ -412,7 +525,8 @@ class CatClient(NamespacedClient):
         :param bytes: The unit used to display byte values.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param s: List of columns that determine how the table should be sorted. Sorting
@@ -465,7 +579,52 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "active_shards_percent",
+                            "cluster",
+                            "epoch",
+                            "init",
+                            "max_task_wait_time",
+                            "node.data",
+                            "node.total",
+                            "pending_tasks",
+                            "pri",
+                            "relo",
+                            "shards",
+                            "status",
+                            "timestamp",
+                            "unassign",
+                            "unassign.pri",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "active_shards_percent",
+                        "cluster",
+                        "epoch",
+                        "init",
+                        "max_task_wait_time",
+                        "node.data",
+                        "node.total",
+                        "pending_tasks",
+                        "pri",
+                        "relo",
+                        "shards",
+                        "status",
+                        "timestamp",
+                        "unassign",
+                        "unassign.pri",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
@@ -495,7 +654,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param s: List of columns that determine how the table should be sorted. Sorting
@@ -583,7 +743,316 @@ class CatClient(NamespacedClient):
         ] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "bulk.avg_size_in_bytes",
+                            "bulk.avg_time",
+                            "bulk.total_operations",
+                            "bulk.total_size_in_bytes",
+                            "bulk.total_time",
+                            "completion.size",
+                            "creation.date",
+                            "creation.date.string",
+                            "dataset.size",
+                            "dense_vector.value_count",
+                            "docs.count",
+                            "docs.deleted",
+                            "fielddata.evictions",
+                            "fielddata.memory_size",
+                            "flush.total",
+                            "flush.total_time",
+                            "get.current",
+                            "get.exists_time",
+                            "get.exists_total",
+                            "get.missing_time",
+                            "get.missing_total",
+                            "get.time",
+                            "get.total",
+                            "health",
+                            "index",
+                            "indexing.delete_current",
+                            "indexing.delete_time",
+                            "indexing.delete_total",
+                            "indexing.index_current",
+                            "indexing.index_failed",
+                            "indexing.index_failed_due_to_version_conflict",
+                            "indexing.index_time",
+                            "indexing.index_total",
+                            "memory.total",
+                            "merges.current",
+                            "merges.current_docs",
+                            "merges.current_size",
+                            "merges.total",
+                            "merges.total_docs",
+                            "merges.total_size",
+                            "merges.total_time",
+                            "pri",
+                            "pri.bulk.avg_size_in_bytes",
+                            "pri.bulk.avg_time",
+                            "pri.bulk.total_operations",
+                            "pri.bulk.total_size_in_bytes",
+                            "pri.bulk.total_time",
+                            "pri.completion.size",
+                            "pri.dense_vector.value_count",
+                            "pri.fielddata.evictions",
+                            "pri.fielddata.memory_size",
+                            "pri.flush.total",
+                            "pri.flush.total_time",
+                            "pri.get.current",
+                            "pri.get.exists_time",
+                            "pri.get.exists_total",
+                            "pri.get.missing_time",
+                            "pri.get.missing_total",
+                            "pri.get.time",
+                            "pri.get.total",
+                            "pri.indexing.delete_current",
+                            "pri.indexing.delete_time",
+                            "pri.indexing.delete_total",
+                            "pri.indexing.index_current",
+                            "pri.indexing.index_failed",
+                            "pri.indexing.index_failed_due_to_version_conflict",
+                            "pri.indexing.index_time",
+                            "pri.indexing.index_total",
+                            "pri.memory.total",
+                            "pri.merges.current",
+                            "pri.merges.current_docs",
+                            "pri.merges.current_size",
+                            "pri.merges.total",
+                            "pri.merges.total_docs",
+                            "pri.merges.total_size",
+                            "pri.merges.total_time",
+                            "pri.query_cache.evictions",
+                            "pri.query_cache.memory_size",
+                            "pri.refresh.external_time",
+                            "pri.refresh.external_total",
+                            "pri.refresh.listeners",
+                            "pri.refresh.time",
+                            "pri.refresh.total",
+                            "pri.request_cache.evictions",
+                            "pri.request_cache.hit_count",
+                            "pri.request_cache.memory_size",
+                            "pri.request_cache.miss_count",
+                            "pri.search.fetch_current",
+                            "pri.search.fetch_time",
+                            "pri.search.fetch_total",
+                            "pri.search.open_contexts",
+                            "pri.search.query_current",
+                            "pri.search.query_time",
+                            "pri.search.query_total",
+                            "pri.search.scroll_current",
+                            "pri.search.scroll_time",
+                            "pri.search.scroll_total",
+                            "pri.segments.count",
+                            "pri.segments.fixed_bitset_memory",
+                            "pri.segments.index_writer_memory",
+                            "pri.segments.memory",
+                            "pri.segments.version_map_memory",
+                            "pri.sparse_vector.value_count",
+                            "pri.store.size",
+                            "pri.suggest.current",
+                            "pri.suggest.time",
+                            "pri.suggest.total",
+                            "pri.warmer.current",
+                            "pri.warmer.total",
+                            "pri.warmer.total_time",
+                            "query_cache.evictions",
+                            "query_cache.memory_size",
+                            "refresh.external_time",
+                            "refresh.external_total",
+                            "refresh.listeners",
+                            "refresh.time",
+                            "refresh.total",
+                            "rep",
+                            "request_cache.evictions",
+                            "request_cache.hit_count",
+                            "request_cache.memory_size",
+                            "request_cache.miss_count",
+                            "search.fetch_current",
+                            "search.fetch_time",
+                            "search.fetch_total",
+                            "search.open_contexts",
+                            "search.query_current",
+                            "search.query_time",
+                            "search.query_total",
+                            "search.scroll_current",
+                            "search.scroll_time",
+                            "search.scroll_total",
+                            "segments.count",
+                            "segments.fixed_bitset_memory",
+                            "segments.index_writer_memory",
+                            "segments.memory",
+                            "segments.version_map_memory",
+                            "sparse_vector.value_count",
+                            "status",
+                            "store.size",
+                            "suggest.current",
+                            "suggest.time",
+                            "suggest.total",
+                            "uuid",
+                            "warmer.current",
+                            "warmer.total",
+                            "warmer.total_time",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "bulk.avg_size_in_bytes",
+                        "bulk.avg_time",
+                        "bulk.total_operations",
+                        "bulk.total_size_in_bytes",
+                        "bulk.total_time",
+                        "completion.size",
+                        "creation.date",
+                        "creation.date.string",
+                        "dataset.size",
+                        "dense_vector.value_count",
+                        "docs.count",
+                        "docs.deleted",
+                        "fielddata.evictions",
+                        "fielddata.memory_size",
+                        "flush.total",
+                        "flush.total_time",
+                        "get.current",
+                        "get.exists_time",
+                        "get.exists_total",
+                        "get.missing_time",
+                        "get.missing_total",
+                        "get.time",
+                        "get.total",
+                        "health",
+                        "index",
+                        "indexing.delete_current",
+                        "indexing.delete_time",
+                        "indexing.delete_total",
+                        "indexing.index_current",
+                        "indexing.index_failed",
+                        "indexing.index_failed_due_to_version_conflict",
+                        "indexing.index_time",
+                        "indexing.index_total",
+                        "memory.total",
+                        "merges.current",
+                        "merges.current_docs",
+                        "merges.current_size",
+                        "merges.total",
+                        "merges.total_docs",
+                        "merges.total_size",
+                        "merges.total_time",
+                        "pri",
+                        "pri.bulk.avg_size_in_bytes",
+                        "pri.bulk.avg_time",
+                        "pri.bulk.total_operations",
+                        "pri.bulk.total_size_in_bytes",
+                        "pri.bulk.total_time",
+                        "pri.completion.size",
+                        "pri.dense_vector.value_count",
+                        "pri.fielddata.evictions",
+                        "pri.fielddata.memory_size",
+                        "pri.flush.total",
+                        "pri.flush.total_time",
+                        "pri.get.current",
+                        "pri.get.exists_time",
+                        "pri.get.exists_total",
+                        "pri.get.missing_time",
+                        "pri.get.missing_total",
+                        "pri.get.time",
+                        "pri.get.total",
+                        "pri.indexing.delete_current",
+                        "pri.indexing.delete_time",
+                        "pri.indexing.delete_total",
+                        "pri.indexing.index_current",
+                        "pri.indexing.index_failed",
+                        "pri.indexing.index_failed_due_to_version_conflict",
+                        "pri.indexing.index_time",
+                        "pri.indexing.index_total",
+                        "pri.memory.total",
+                        "pri.merges.current",
+                        "pri.merges.current_docs",
+                        "pri.merges.current_size",
+                        "pri.merges.total",
+                        "pri.merges.total_docs",
+                        "pri.merges.total_size",
+                        "pri.merges.total_time",
+                        "pri.query_cache.evictions",
+                        "pri.query_cache.memory_size",
+                        "pri.refresh.external_time",
+                        "pri.refresh.external_total",
+                        "pri.refresh.listeners",
+                        "pri.refresh.time",
+                        "pri.refresh.total",
+                        "pri.request_cache.evictions",
+                        "pri.request_cache.hit_count",
+                        "pri.request_cache.memory_size",
+                        "pri.request_cache.miss_count",
+                        "pri.search.fetch_current",
+                        "pri.search.fetch_time",
+                        "pri.search.fetch_total",
+                        "pri.search.open_contexts",
+                        "pri.search.query_current",
+                        "pri.search.query_time",
+                        "pri.search.query_total",
+                        "pri.search.scroll_current",
+                        "pri.search.scroll_time",
+                        "pri.search.scroll_total",
+                        "pri.segments.count",
+                        "pri.segments.fixed_bitset_memory",
+                        "pri.segments.index_writer_memory",
+                        "pri.segments.memory",
+                        "pri.segments.version_map_memory",
+                        "pri.sparse_vector.value_count",
+                        "pri.store.size",
+                        "pri.suggest.current",
+                        "pri.suggest.time",
+                        "pri.suggest.total",
+                        "pri.warmer.current",
+                        "pri.warmer.total",
+                        "pri.warmer.total_time",
+                        "query_cache.evictions",
+                        "query_cache.memory_size",
+                        "refresh.external_time",
+                        "refresh.external_total",
+                        "refresh.listeners",
+                        "refresh.time",
+                        "refresh.total",
+                        "rep",
+                        "request_cache.evictions",
+                        "request_cache.hit_count",
+                        "request_cache.memory_size",
+                        "request_cache.miss_count",
+                        "search.fetch_current",
+                        "search.fetch_time",
+                        "search.fetch_total",
+                        "search.open_contexts",
+                        "search.query_current",
+                        "search.query_time",
+                        "search.query_total",
+                        "search.scroll_current",
+                        "search.scroll_time",
+                        "search.scroll_total",
+                        "segments.count",
+                        "segments.fixed_bitset_memory",
+                        "segments.index_writer_memory",
+                        "segments.memory",
+                        "segments.version_map_memory",
+                        "sparse_vector.value_count",
+                        "status",
+                        "store.size",
+                        "suggest.current",
+                        "suggest.time",
+                        "suggest.total",
+                        "uuid",
+                        "warmer.current",
+                        "warmer.total",
+                        "warmer.total_time",
+                    ],
+                ],
+            ]
+        ] = None,
         health: t.Optional[
             t.Union[str, t.Literal["green", "red", "unavailable", "unknown", "yellow"]]
         ] = None,
@@ -627,7 +1096,8 @@ class CatClient(NamespacedClient):
         :param expand_wildcards: The type of index that wildcard patterns can match.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param health: The health status used to limit returned indices. By default,
             the response includes indices of any health status.
         :param help: When set to `true` will output available columns. This option can't
@@ -699,7 +1169,12 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[t.Union[str, t.Literal["host", "id", "ip", "node"]]],
+                t.Union[str, t.Literal["host", "id", "ip", "node"]],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -720,7 +1195,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of columns names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -1689,7 +2165,24 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "attr", "host", "id", "ip", "node", "pid", "port", "value"
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "attr", "host", "id", "ip", "node", "pid", "port", "value"
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -1710,7 +2203,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -2050,7 +2544,19 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal["insertOrder", "priority", "source", "timeInQueue"],
+                    ]
+                ],
+                t.Union[
+                    str, t.Literal["insertOrder", "priority", "source", "timeInQueue"]
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -2074,7 +2580,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -2132,7 +2639,19 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal["component", "description", "id", "name", "version"],
+                    ]
+                ],
+                t.Union[
+                    str, t.Literal["component", "description", "id", "name", "version"]
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         include_bootstrap: t.Optional[bool] = None,
@@ -2154,7 +2673,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param include_bootstrap: Include bootstrap plugins in the response
@@ -2972,7 +3492,52 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "action",
+                            "id",
+                            "ip",
+                            "node",
+                            "node_id",
+                            "parent_task_id",
+                            "port",
+                            "running_time",
+                            "running_time_ns",
+                            "start_time",
+                            "task_id",
+                            "timestamp",
+                            "type",
+                            "version",
+                            "x_opaque_id",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "action",
+                        "id",
+                        "ip",
+                        "node",
+                        "node_id",
+                        "parent_task_id",
+                        "port",
+                        "running_time",
+                        "running_time_ns",
+                        "start_time",
+                        "task_id",
+                        "timestamp",
+                        "type",
+                        "version",
+                        "x_opaque_id",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         nodes: t.Optional[t.Sequence[str]] = None,
@@ -3001,7 +3566,8 @@ class CatClient(NamespacedClient):
             shard recoveries.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param nodes: Unique node identifiers, which are used to limit the response.
@@ -3070,7 +3636,24 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "composed_of", "index_patterns", "name", "order", "version"
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "composed_of", "index_patterns", "name", "order", "version"
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -3094,7 +3677,8 @@ class CatClient(NamespacedClient):
             If omitted, all templates are returned.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
diff -pruN 9.1.0-3/elasticsearch/_async/client/cluster.py 9.1.1-1/elasticsearch/_async/client/cluster.py
--- 9.1.0-3/elasticsearch/_async/client/cluster.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/cluster.py	2025-09-12 13:23:45.000000000 +0000
@@ -374,8 +374,13 @@ class ClusterClient(NamespacedClient):
         `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings>`_
 
         :param flat_settings: If `true`, returns settings in flat format.
-        :param include_defaults: If `true`, returns default cluster settings from the
-            local node.
+        :param include_defaults: If `true`, also returns default values for all other
+            cluster settings, reflecting the values in the `elasticsearch.yml` file of
+            one of the nodes in the cluster. If the nodes in your cluster do not all
+            have the same values in their `elasticsearch.yml` config files then the values
+            returned by this API may vary from invocation to invocation and may not reflect
+            the values that Elasticsearch uses in all situations. Use the `GET _nodes/settings`
+            API to fetch the settings for each individual node in your cluster.
         :param master_timeout: Period to wait for a connection to the master node. If
             no response is received before the timeout expires, the request fails and
             returns an error.
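
To make the expanded `include_defaults` caveat concrete, here is a minimal sketch; the endpoint is a hypothetical local cluster:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# With include_defaults=True the response carries a "defaults" section in
# addition to "persistent" and "transient"; as the docstring warns, those
# defaults come from a single node's elasticsearch.yml and may vary.
settings = client.cluster.get_settings(include_defaults=True, flat_settings=True)
print(settings["defaults"].get("cluster.name"))
```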
diff -pruN 9.1.0-3/elasticsearch/_async/client/esql.py 9.1.1-1/elasticsearch/_async/client/esql.py
--- 9.1.0-3/elasticsearch/_async/client/esql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/esql.py	2025-09-12 13:23:45.000000000 +0000
@@ -28,6 +28,9 @@ from .utils import (
     _stability_warning,
 )
 
+if t.TYPE_CHECKING:
+    from elasticsearch.esql import ESQLBase
+
 
 class EsqlClient(NamespacedClient):
 
@@ -50,7 +53,7 @@ class EsqlClient(NamespacedClient):
     async def async_query(
         self,
         *,
-        query: t.Optional[str] = None,
+        query: t.Optional[t.Union[str, "ESQLBase"]] = None,
         allow_partial_results: t.Optional[bool] = None,
         columnar: t.Optional[bool] = None,
         delimiter: t.Optional[str] = None,
@@ -111,7 +114,12 @@ class EsqlClient(NamespacedClient):
             which has the name of all the columns.
         :param filter: Specify a Query DSL query in the filter parameter to filter the
             set of documents that an ES|QL query runs on.
-        :param format: A short version of the Accept header, for example `json` or `yaml`.
+        :param format: A short version of the Accept header, for example `json` or
+            `yaml`. The `csv`, `tsv`, and `txt` formats will return results in a tabular
+            format, excluding other metadata fields from the response. For async requests,
+            nothing will be returned if the async query doesn't finish within the timeout.
+            The query ID and running status are available in the `X-Elasticsearch-Async-Id`
+            and `X-Elasticsearch-Async-Is-Running` HTTP headers of the response, respectively.
         :param include_ccs_metadata: When set to `true` and performing a cross-cluster
             query, the response will include an extra `_clusters` object with information
             about the clusters that participated in the search along with info such as
@@ -165,7 +173,7 @@ class EsqlClient(NamespacedClient):
             __query["pretty"] = pretty
         if not __body:
             if query is not None:
-                __body["query"] = query
+                __body["query"] = str(query)
             if columnar is not None:
                 __body["columnar"] = columnar
             if filter is not None:
@@ -405,6 +413,8 @@ class EsqlClient(NamespacedClient):
           Returns an object with extended information about a running ES|QL query.</p>
 
 
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-get-query>`_
+
         :param id: The query ID
         """
         if id in SKIP_IN_PATH:
@@ -446,6 +456,8 @@ class EsqlClient(NamespacedClient):
           <p>Get running ES|QL queries information.
           Returns an object containing IDs and other information about the running ES|QL queries.</p>
 
+
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-list-queries>`_
         """
         __path_parts: t.Dict[str, str] = {}
         __path = "/_query/queries"
@@ -484,7 +496,7 @@ class EsqlClient(NamespacedClient):
     async def query(
         self,
         *,
-        query: t.Optional[str] = None,
+        query: t.Optional[t.Union[str, "ESQLBase"]] = None,
         allow_partial_results: t.Optional[bool] = None,
         columnar: t.Optional[bool] = None,
         delimiter: t.Optional[str] = None,
@@ -539,7 +551,9 @@ class EsqlClient(NamespacedClient):
             `all_columns` which has the name of all columns.
         :param filter: Specify a Query DSL query in the filter parameter to filter the
             set of documents that an ES|QL query runs on.
-        :param format: A short version of the Accept header, e.g. json, yaml.
+        :param format: A short version of the Accept header, for example `json` or
+            `yaml`. The `csv`, `tsv`, and `txt` formats will return results in a tabular
+            format, excluding other metadata fields from the response.
         :param include_ccs_metadata: When set to `true` and performing a cross-cluster
             query, the response will include an extra `_clusters` object with information
             about the clusters that participated in the search along with info such as
@@ -579,7 +593,7 @@ class EsqlClient(NamespacedClient):
             __query["pretty"] = pretty
         if not __body:
             if query is not None:
-                __body["query"] = query
+                __body["query"] = str(query)
             if columnar is not None:
                 __body["columnar"] = columnar
             if filter is not None:
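
Since `query` is now typed as `t.Union[str, "ESQLBase"]` and serialized with `str(query)`, a builder object can be passed directly. A minimal sketch, assuming the `employees` index from the DSL guide:

```python
from elasticsearch import Elasticsearch
from elasticsearch.esql import ESQL

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# The client now calls str() on the query body, so an ES|QL builder
# object can be passed where a raw query string was previously required.
query = ESQL.from_("employees").where("height > 2").sort("last_name").limit(4)
resp = client.esql.query(query=query)
for row in resp["values"]:
    print(row)
```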
diff -pruN 9.1.0-3/elasticsearch/_async/client/indices.py 9.1.1-1/elasticsearch/_async/client/indices.py
--- 9.1.0-3/elasticsearch/_async/client/indices.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/indices.py	2025-09-12 13:23:45.000000000 +0000
@@ -1208,7 +1208,7 @@ class IndicesClient(NamespacedClient):
           Removes the data stream options from a data stream.</p>
 
 
-        `<https://www.elastic.co/guide/en/elasticsearch/reference/9.1/index.html>`_
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-stream-options>`_
 
         :param name: A comma-separated list of data streams of which the data stream
             options will be deleted; use `*` to get all data streams
@@ -2568,7 +2568,7 @@ class IndicesClient(NamespacedClient):
           <p>Get the data stream options configuration of one or more data streams.</p>
 
 
-        `<https://www.elastic.co/guide/en/elasticsearch/reference/9.1/index.html>`_
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream-options>`_
 
         :param name: Comma-separated list of data streams to limit the request. Supports
             wildcards (`*`). To target all data streams, omit this parameter or use `*`
@@ -3684,7 +3684,7 @@ class IndicesClient(NamespacedClient):
           Update the data stream options of the specified data streams.</p>
 
 
-        `<https://www.elastic.co/guide/en/elasticsearch/reference/9.1/index.html>`_
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options>`_
 
         :param name: Comma-separated list of data streams used to limit the request.
             Supports wildcards (`*`). To target all data streams use `*` or `_all`.
@@ -4051,7 +4051,7 @@ class IndicesClient(NamespacedClient):
           <li>Change a field's mapping using reindexing</li>
           <li>Rename a field using a field alias</li>
           </ul>
-          <p>Learn how to use the update mapping API with practical examples in the <a href="https://www.elastic.co/docs//manage-data/data-store/mapping/update-mappings-examples">Update mapping API examples</a> guide.</p>
+          <p>Learn how to use the update mapping API with practical examples in the <a href="https://www.elastic.co/docs/manage-data/data-store/mapping/update-mappings-examples">Update mapping API examples</a> guide.</p>
 
 
         `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping>`_
diff -pruN 9.1.0-3/elasticsearch/_async/client/inference.py 9.1.1-1/elasticsearch/_async/client/inference.py
--- 9.1.0-3/elasticsearch/_async/client/inference.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/inference.py	2025-09-12 13:23:45.000000000 +0000
@@ -396,17 +396,18 @@ class InferenceClient(NamespacedClient):
           <li>Azure AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
           <li>Azure OpenAI (<code>completion</code>, <code>text_embedding</code>)</li>
           <li>Cohere (<code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
-          <li>DeepSeek (<code>completion</code>, <code>chat_completion</code>)</li>
+          <li>DeepSeek (<code>chat_completion</code>, <code>completion</code>)</li>
           <li>Elasticsearch (<code>rerank</code>, <code>sparse_embedding</code>, <code>text_embedding</code> - this service is for built-in models and models uploaded through Eland)</li>
           <li>ELSER (<code>sparse_embedding</code>)</li>
           <li>Google AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
-          <li>Google Vertex AI (<code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>Google Vertex AI (<code>chat_completion</code>, <code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
           <li>Hugging Face (<code>chat_completion</code>, <code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>JinaAI (<code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>Llama (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
           <li>Mistral (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
           <li>OpenAI (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
-          <li>VoyageAI (<code>text_embedding</code>, <code>rerank</code>)</li>
+          <li>VoyageAI (<code>rerank</code>, <code>text_embedding</code>)</li>
           <li>Watsonx inference integration (<code>text_embedding</code>)</li>
-          <li>JinaAI (<code>text_embedding</code>, <code>rerank</code>)</li>
           </ul>
 
 
diff -pruN 9.1.0-3/elasticsearch/_async/client/sql.py 9.1.1-1/elasticsearch/_async/client/sql.py
--- 9.1.0-3/elasticsearch/_async/client/sql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/sql.py	2025-09-12 13:23:45.000000000 +0000
@@ -283,7 +283,7 @@ class SqlClient(NamespacedClient):
         keep_alive: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         keep_on_completion: t.Optional[bool] = None,
         page_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
-        params: t.Optional[t.Mapping[str, t.Any]] = None,
+        params: t.Optional[t.Sequence[t.Any]] = None,
         pretty: t.Optional[bool] = None,
         query: t.Optional[str] = None,
         request_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
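
The switch of `params` from a mapping to a sequence reflects that SQL parameters are positional, binding to `?` placeholders in order. A minimal sketch:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Parameters are positional and bind to the "?" placeholders in order,
# which is why params is now annotated as a sequence rather than a mapping.
resp = client.sql.query(
    query="SELECT last_name, height FROM employees WHERE height > ? LIMIT ?",
    params=[2.0, 4],
)
print(resp["rows"])
```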
diff -pruN 9.1.0-3/elasticsearch/_async/client/transform.py 9.1.1-1/elasticsearch/_async/client/transform.py
--- 9.1.0-3/elasticsearch/_async/client/transform.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_async/client/transform.py	2025-09-12 13:23:45.000000000 +0000
@@ -602,6 +602,66 @@ class TransformClient(NamespacedClient):
             path_parts=__path_parts,
         )
 
+    @_rewrite_parameters()
+    async def set_upgrade_mode(
+        self,
+        *,
+        enabled: t.Optional[bool] = None,
+        error_trace: t.Optional[bool] = None,
+        filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        human: t.Optional[bool] = None,
+        pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
+    ) -> ObjectApiResponse[t.Any]:
+        """
+        .. raw:: html
+
+          <p>Set upgrade_mode for transform indices.
+          Sets a cluster-wide upgrade_mode setting that prepares transform
+          indices for an upgrade.
+          When upgrading your cluster, in some circumstances you must restart your
+          nodes and reindex your transform indices. In those circumstances,
+          there must be no transforms running. You can close the transforms,
+          do the upgrade, then open all the transforms again. Alternatively,
+          you can use this API to temporarily halt tasks associated with the transforms
+          and prevent new transforms from opening. You can also use this API
+          during upgrades that do not require you to reindex your transform
+          indices, though stopping transforms is not a requirement in that case.
+          You can see the current value for the upgrade_mode setting by using the get
+          transform info API.</p>
+
+
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-set-upgrade-mode>`_
+
+        :param enabled: When `true`, it enables `upgrade_mode` which temporarily halts
+            all transform tasks and prohibits new transform tasks from starting.
+        :param timeout: The time to wait for the request to be completed.
+        """
+        __path_parts: t.Dict[str, str] = {}
+        __path = "/_transform/set_upgrade_mode"
+        __query: t.Dict[str, t.Any] = {}
+        if enabled is not None:
+            __query["enabled"] = enabled
+        if error_trace is not None:
+            __query["error_trace"] = error_trace
+        if filter_path is not None:
+            __query["filter_path"] = filter_path
+        if human is not None:
+            __query["human"] = human
+        if pretty is not None:
+            __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
+        __headers = {"accept": "application/json"}
+        return await self.perform_request(  # type: ignore[return-value]
+            "POST",
+            __path,
+            params=__query,
+            headers=__headers,
+            endpoint_id="transform.set_upgrade_mode",
+            path_parts=__path_parts,
+        )
+
     @_rewrite_parameters(
         parameter_aliases={"from": "from_"},
     )
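
A usage sketch for the new endpoint, following the docstring's rolling-upgrade flow (the endpoint is hypothetical):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Temporarily halt all transform tasks and block new ones from starting.
client.transform.set_upgrade_mode(enabled=True, timeout="10m")

# ... restart nodes / reindex transform indices ...

# Resume normal transform operation once the upgrade is complete.
client.transform.set_upgrade_mode(enabled=False)
```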
diff -pruN 9.1.0-3/elasticsearch/_sync/client/__init__.py 9.1.1-1/elasticsearch/_sync/client/__init__.py
--- 9.1.0-3/elasticsearch/_sync/client/__init__.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/__init__.py	2025-09-12 13:23:45.000000000 +0000
@@ -606,6 +606,7 @@ class Elasticsearch(BaseClient):
           <li>JavaScript: Check out <code>client.helpers.*</code></li>
           <li>.NET: Check out <code>BulkAllObservable</code></li>
           <li>PHP: Check out bulk indexing.</li>
+          <li>Ruby: Check out <code>Elasticsearch::Helpers::BulkHelper</code></li>
           </ul>
           <p><strong>Submitting bulk requests with cURL</strong></p>
           <p>If you're providing text file input to <code>curl</code>, you must use the <code>--data-binary</code> flag instead of plain <code>-d</code>.
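
For this client itself the corresponding helper is `elasticsearch.helpers.bulk`; a minimal sketch against a hypothetical `my-index`:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Each action is a plain dict; bulk() chunks and submits them and returns
# the count of successful actions plus a list of any errors.
actions = ({"_index": "my-index", "_source": {"value": i}} for i in range(100))
success, errors = bulk(client, actions)
print(success, errors)
```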
@@ -1324,7 +1325,7 @@ class Elasticsearch(BaseClient):
         )
 
     @_rewrite_parameters(
-        body_fields=("max_docs", "query", "slice"),
+        body_fields=("max_docs", "query", "slice", "sort"),
         parameter_aliases={"from": "from_"},
     )
     def delete_by_query(
@@ -1368,7 +1369,12 @@ class Elasticsearch(BaseClient):
         ] = None,
         slice: t.Optional[t.Mapping[str, t.Any]] = None,
         slices: t.Optional[t.Union[int, t.Union[str, t.Literal["auto"]]]] = None,
-        sort: t.Optional[t.Sequence[str]] = None,
+        sort: t.Optional[
+            t.Union[
+                t.Sequence[t.Union[str, t.Mapping[str, t.Any]]],
+                t.Union[str, t.Mapping[str, t.Any]],
+            ]
+        ] = None,
         stats: t.Optional[t.Sequence[str]] = None,
         terminate_after: t.Optional[int] = None,
         timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
@@ -1500,7 +1506,7 @@ class Elasticsearch(BaseClient):
         :param slice: Slice the request manually using the provided slice ID and total
             number of slices.
         :param slices: The number of slices this task should be divided into.
-        :param sort: A comma-separated list of `<field>:<direction>` pairs.
+        :param sort: A sort object that specifies the order of deleted documents.
         :param stats: The specific `tag` of the request for logging and statistical purposes.
         :param terminate_after: The maximum number of documents to collect for each shard.
             If a query reaches this limit, Elasticsearch terminates the query early.
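
With `sort` moved into the request body it accepts full sort objects rather than only `<field>:<direction>` strings. A minimal sketch against a hypothetical `my-index` with a `timestamp` field:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# sort is now serialized into the request body, so structured sort
# clauses work here just as they do in _search.
client.delete_by_query(
    index="my-index",
    query={"match": {"status": "stale"}},
    max_docs=1000,
    sort=[{"timestamp": {"order": "asc"}}],
)
```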
@@ -1590,8 +1596,6 @@ class Elasticsearch(BaseClient):
             __query["search_type"] = search_type
         if slices is not None:
             __query["slices"] = slices
-        if sort is not None:
-            __query["sort"] = sort
         if stats is not None:
             __query["stats"] = stats
         if terminate_after is not None:
@@ -1611,6 +1615,8 @@ class Elasticsearch(BaseClient):
                 __body["query"] = query
             if slice is not None:
                 __body["slice"] = slice
+            if sort is not None:
+                __body["sort"] = sort
         __headers = {"accept": "application/json", "content-type": "application/json"}
         return self.perform_request(  # type: ignore[return-value]
             "POST",
@@ -3868,6 +3874,13 @@ class Elasticsearch(BaseClient):
           In this case, the response includes a count of the version conflicts that were encountered.
           Note that the handling of other error types is unaffected by the <code>conflicts</code> property.
           Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than <code>max_docs</code> until it has successfully indexed <code>max_docs</code> documents into the target or it has gone through every document in the source query.</p>
+          <p>It's recommended to reindex on indices with a green status. Reindexing can fail when a node shuts down or crashes.</p>
+          <ul>
+          <li>When requested with <code>wait_for_completion=true</code> (default), the request fails if the node shuts down.</li>
+          <li>When requested with <code>wait_for_completion=false</code>, a task id is returned, for use with the task management APIs. The task may disappear or fail if the node shuts down.
+          When retrying a failed reindex operation, it might be necessary to set <code>conflicts=proceed</code> or to first delete the partial destination index.
+          Additionally, dry runs, checking disk space, and fetching index recovery information can help address the root cause.</li>
+          </ul>
           <p>Refer to the linked documentation for examples of how to reindex documents.</p>
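
A sketch of the `wait_for_completion=false` flow described above, using hypothetical `src-index` and `dst-index`:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# With wait_for_completion=False the call returns a task id immediately;
# progress can then be polled through the task management API.
resp = client.reindex(
    source={"index": "src-index"},
    dest={"index": "dst-index"},
    wait_for_completion=False,
)
task = client.tasks.get(task_id=resp["task"])
print(task["completed"])
```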
 
 
@@ -5647,7 +5660,7 @@ class Elasticsearch(BaseClient):
         doc: t.Optional[t.Mapping[str, t.Any]] = None,
         error_trace: t.Optional[bool] = None,
         field_statistics: t.Optional[bool] = None,
-        fields: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        fields: t.Optional[t.Sequence[str]] = None,
         filter: t.Optional[t.Mapping[str, t.Any]] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         human: t.Optional[bool] = None,
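
A minimal sketch of the narrowed `fields` annotation (index name and field are hypothetical):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# fields is now t.Sequence[str], so a list of field names is the
# expected form rather than a single comma-separated string.
tv = client.termvectors(index="my-index", id="1", fields=["body"])
print(list(tv["term_vectors"].keys()))
```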
diff -pruN 9.1.0-3/elasticsearch/_sync/client/cat.py 9.1.1-1/elasticsearch/_sync/client/cat.py
--- 9.1.0-3/elasticsearch/_sync/client/cat.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/cat.py	2025-09-12 13:23:45.000000000 +0000
@@ -47,7 +47,34 @@ class CatClient(NamespacedClient):
         ] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "alias",
+                            "filter",
+                            "index",
+                            "is_write_index",
+                            "routing.index",
+                            "routing.search",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "alias",
+                        "filter",
+                        "index",
+                        "is_write_index",
+                        "routing.index",
+                        "routing.search",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         master_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
@@ -74,7 +101,8 @@ class CatClient(NamespacedClient):
             values, such as `open,hidden`.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param master_timeout: The period to wait for a connection to the master node.
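
A minimal sketch of the new typed `h` parameter, using `cat.aliases` and a hypothetical endpoint:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# The literals document the valid columns; arbitrary strings such as
# "routing.*" remain accepted because the union still includes str.
print(client.cat.aliases(h=["alias", "index", "is_write_index"], format="json"))
```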
@@ -137,7 +165,48 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "disk.avail",
+                            "disk.indices",
+                            "disk.indices.forecast",
+                            "disk.percent",
+                            "disk.total",
+                            "disk.used",
+                            "host",
+                            "ip",
+                            "node",
+                            "node.role",
+                            "shards",
+                            "shards.undesired",
+                            "write_load.forecast",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "disk.avail",
+                        "disk.indices",
+                        "disk.indices.forecast",
+                        "disk.percent",
+                        "disk.total",
+                        "disk.used",
+                        "host",
+                        "ip",
+                        "node",
+                        "node.role",
+                        "shards",
+                        "shards.undesired",
+                        "write_load.forecast",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -161,7 +230,8 @@ class CatClient(NamespacedClient):
         :param bytes: The unit used to display byte values.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -224,7 +294,36 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "alias_count",
+                            "included_in",
+                            "mapping_count",
+                            "metadata_count",
+                            "name",
+                            "settings_count",
+                            "version",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "alias_count",
+                        "included_in",
+                        "mapping_count",
+                        "metadata_count",
+                        "name",
+                        "settings_count",
+                        "version",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -249,7 +348,8 @@ class CatClient(NamespacedClient):
             If it is omitted, all component templates are returned.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -310,7 +410,12 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[t.Union[str, t.Literal["count", "epoch", "timestamp"]]],
+                t.Union[str, t.Literal["count", "epoch", "timestamp"]],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
@@ -334,7 +439,8 @@ class CatClient(NamespacedClient):
             and indices, omit this parameter or use `*` or `_all`.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param s: List of columns that determine how the table should be sorted. Sorting
@@ -389,7 +495,14 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[str, t.Literal["field", "host", "id", "ip", "node", "size"]]
+                ],
+                t.Union[str, t.Literal["field", "host", "id", "ip", "node", "size"]],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
@@ -412,7 +525,8 @@ class CatClient(NamespacedClient):
         :param bytes: The unit used to display byte values.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param s: List of columns that determine how the table should be sorted. Sorting
@@ -465,7 +579,52 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "active_shards_percent",
+                            "cluster",
+                            "epoch",
+                            "init",
+                            "max_task_wait_time",
+                            "node.data",
+                            "node.total",
+                            "pending_tasks",
+                            "pri",
+                            "relo",
+                            "shards",
+                            "status",
+                            "timestamp",
+                            "unassign",
+                            "unassign.pri",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "active_shards_percent",
+                        "cluster",
+                        "epoch",
+                        "init",
+                        "max_task_wait_time",
+                        "node.data",
+                        "node.total",
+                        "pending_tasks",
+                        "pri",
+                        "relo",
+                        "shards",
+                        "status",
+                        "timestamp",
+                        "unassign",
+                        "unassign.pri",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         pretty: t.Optional[bool] = None,
@@ -495,7 +654,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param s: List of columns that determine how the table should be sorted. Sorting
@@ -583,7 +743,316 @@ class CatClient(NamespacedClient):
         ] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "bulk.avg_size_in_bytes",
+                            "bulk.avg_time",
+                            "bulk.total_operations",
+                            "bulk.total_size_in_bytes",
+                            "bulk.total_time",
+                            "completion.size",
+                            "creation.date",
+                            "creation.date.string",
+                            "dataset.size",
+                            "dense_vector.value_count",
+                            "docs.count",
+                            "docs.deleted",
+                            "fielddata.evictions",
+                            "fielddata.memory_size",
+                            "flush.total",
+                            "flush.total_time",
+                            "get.current",
+                            "get.exists_time",
+                            "get.exists_total",
+                            "get.missing_time",
+                            "get.missing_total",
+                            "get.time",
+                            "get.total",
+                            "health",
+                            "index",
+                            "indexing.delete_current",
+                            "indexing.delete_time",
+                            "indexing.delete_total",
+                            "indexing.index_current",
+                            "indexing.index_failed",
+                            "indexing.index_failed_due_to_version_conflict",
+                            "indexing.index_time",
+                            "indexing.index_total",
+                            "memory.total",
+                            "merges.current",
+                            "merges.current_docs",
+                            "merges.current_size",
+                            "merges.total",
+                            "merges.total_docs",
+                            "merges.total_size",
+                            "merges.total_time",
+                            "pri",
+                            "pri.bulk.avg_size_in_bytes",
+                            "pri.bulk.avg_time",
+                            "pri.bulk.total_operations",
+                            "pri.bulk.total_size_in_bytes",
+                            "pri.bulk.total_time",
+                            "pri.completion.size",
+                            "pri.dense_vector.value_count",
+                            "pri.fielddata.evictions",
+                            "pri.fielddata.memory_size",
+                            "pri.flush.total",
+                            "pri.flush.total_time",
+                            "pri.get.current",
+                            "pri.get.exists_time",
+                            "pri.get.exists_total",
+                            "pri.get.missing_time",
+                            "pri.get.missing_total",
+                            "pri.get.time",
+                            "pri.get.total",
+                            "pri.indexing.delete_current",
+                            "pri.indexing.delete_time",
+                            "pri.indexing.delete_total",
+                            "pri.indexing.index_current",
+                            "pri.indexing.index_failed",
+                            "pri.indexing.index_failed_due_to_version_conflict",
+                            "pri.indexing.index_time",
+                            "pri.indexing.index_total",
+                            "pri.memory.total",
+                            "pri.merges.current",
+                            "pri.merges.current_docs",
+                            "pri.merges.current_size",
+                            "pri.merges.total",
+                            "pri.merges.total_docs",
+                            "pri.merges.total_size",
+                            "pri.merges.total_time",
+                            "pri.query_cache.evictions",
+                            "pri.query_cache.memory_size",
+                            "pri.refresh.external_time",
+                            "pri.refresh.external_total",
+                            "pri.refresh.listeners",
+                            "pri.refresh.time",
+                            "pri.refresh.total",
+                            "pri.request_cache.evictions",
+                            "pri.request_cache.hit_count",
+                            "pri.request_cache.memory_size",
+                            "pri.request_cache.miss_count",
+                            "pri.search.fetch_current",
+                            "pri.search.fetch_time",
+                            "pri.search.fetch_total",
+                            "pri.search.open_contexts",
+                            "pri.search.query_current",
+                            "pri.search.query_time",
+                            "pri.search.query_total",
+                            "pri.search.scroll_current",
+                            "pri.search.scroll_time",
+                            "pri.search.scroll_total",
+                            "pri.segments.count",
+                            "pri.segments.fixed_bitset_memory",
+                            "pri.segments.index_writer_memory",
+                            "pri.segments.memory",
+                            "pri.segments.version_map_memory",
+                            "pri.sparse_vector.value_count",
+                            "pri.store.size",
+                            "pri.suggest.current",
+                            "pri.suggest.time",
+                            "pri.suggest.total",
+                            "pri.warmer.current",
+                            "pri.warmer.total",
+                            "pri.warmer.total_time",
+                            "query_cache.evictions",
+                            "query_cache.memory_size",
+                            "refresh.external_time",
+                            "refresh.external_total",
+                            "refresh.listeners",
+                            "refresh.time",
+                            "refresh.total",
+                            "rep",
+                            "request_cache.evictions",
+                            "request_cache.hit_count",
+                            "request_cache.memory_size",
+                            "request_cache.miss_count",
+                            "search.fetch_current",
+                            "search.fetch_time",
+                            "search.fetch_total",
+                            "search.open_contexts",
+                            "search.query_current",
+                            "search.query_time",
+                            "search.query_total",
+                            "search.scroll_current",
+                            "search.scroll_time",
+                            "search.scroll_total",
+                            "segments.count",
+                            "segments.fixed_bitset_memory",
+                            "segments.index_writer_memory",
+                            "segments.memory",
+                            "segments.version_map_memory",
+                            "sparse_vector.value_count",
+                            "status",
+                            "store.size",
+                            "suggest.current",
+                            "suggest.time",
+                            "suggest.total",
+                            "uuid",
+                            "warmer.current",
+                            "warmer.total",
+                            "warmer.total_time",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "bulk.avg_size_in_bytes",
+                        "bulk.avg_time",
+                        "bulk.total_operations",
+                        "bulk.total_size_in_bytes",
+                        "bulk.total_time",
+                        "completion.size",
+                        "creation.date",
+                        "creation.date.string",
+                        "dataset.size",
+                        "dense_vector.value_count",
+                        "docs.count",
+                        "docs.deleted",
+                        "fielddata.evictions",
+                        "fielddata.memory_size",
+                        "flush.total",
+                        "flush.total_time",
+                        "get.current",
+                        "get.exists_time",
+                        "get.exists_total",
+                        "get.missing_time",
+                        "get.missing_total",
+                        "get.time",
+                        "get.total",
+                        "health",
+                        "index",
+                        "indexing.delete_current",
+                        "indexing.delete_time",
+                        "indexing.delete_total",
+                        "indexing.index_current",
+                        "indexing.index_failed",
+                        "indexing.index_failed_due_to_version_conflict",
+                        "indexing.index_time",
+                        "indexing.index_total",
+                        "memory.total",
+                        "merges.current",
+                        "merges.current_docs",
+                        "merges.current_size",
+                        "merges.total",
+                        "merges.total_docs",
+                        "merges.total_size",
+                        "merges.total_time",
+                        "pri",
+                        "pri.bulk.avg_size_in_bytes",
+                        "pri.bulk.avg_time",
+                        "pri.bulk.total_operations",
+                        "pri.bulk.total_size_in_bytes",
+                        "pri.bulk.total_time",
+                        "pri.completion.size",
+                        "pri.dense_vector.value_count",
+                        "pri.fielddata.evictions",
+                        "pri.fielddata.memory_size",
+                        "pri.flush.total",
+                        "pri.flush.total_time",
+                        "pri.get.current",
+                        "pri.get.exists_time",
+                        "pri.get.exists_total",
+                        "pri.get.missing_time",
+                        "pri.get.missing_total",
+                        "pri.get.time",
+                        "pri.get.total",
+                        "pri.indexing.delete_current",
+                        "pri.indexing.delete_time",
+                        "pri.indexing.delete_total",
+                        "pri.indexing.index_current",
+                        "pri.indexing.index_failed",
+                        "pri.indexing.index_failed_due_to_version_conflict",
+                        "pri.indexing.index_time",
+                        "pri.indexing.index_total",
+                        "pri.memory.total",
+                        "pri.merges.current",
+                        "pri.merges.current_docs",
+                        "pri.merges.current_size",
+                        "pri.merges.total",
+                        "pri.merges.total_docs",
+                        "pri.merges.total_size",
+                        "pri.merges.total_time",
+                        "pri.query_cache.evictions",
+                        "pri.query_cache.memory_size",
+                        "pri.refresh.external_time",
+                        "pri.refresh.external_total",
+                        "pri.refresh.listeners",
+                        "pri.refresh.time",
+                        "pri.refresh.total",
+                        "pri.request_cache.evictions",
+                        "pri.request_cache.hit_count",
+                        "pri.request_cache.memory_size",
+                        "pri.request_cache.miss_count",
+                        "pri.search.fetch_current",
+                        "pri.search.fetch_time",
+                        "pri.search.fetch_total",
+                        "pri.search.open_contexts",
+                        "pri.search.query_current",
+                        "pri.search.query_time",
+                        "pri.search.query_total",
+                        "pri.search.scroll_current",
+                        "pri.search.scroll_time",
+                        "pri.search.scroll_total",
+                        "pri.segments.count",
+                        "pri.segments.fixed_bitset_memory",
+                        "pri.segments.index_writer_memory",
+                        "pri.segments.memory",
+                        "pri.segments.version_map_memory",
+                        "pri.sparse_vector.value_count",
+                        "pri.store.size",
+                        "pri.suggest.current",
+                        "pri.suggest.time",
+                        "pri.suggest.total",
+                        "pri.warmer.current",
+                        "pri.warmer.total",
+                        "pri.warmer.total_time",
+                        "query_cache.evictions",
+                        "query_cache.memory_size",
+                        "refresh.external_time",
+                        "refresh.external_total",
+                        "refresh.listeners",
+                        "refresh.time",
+                        "refresh.total",
+                        "rep",
+                        "request_cache.evictions",
+                        "request_cache.hit_count",
+                        "request_cache.memory_size",
+                        "request_cache.miss_count",
+                        "search.fetch_current",
+                        "search.fetch_time",
+                        "search.fetch_total",
+                        "search.open_contexts",
+                        "search.query_current",
+                        "search.query_time",
+                        "search.query_total",
+                        "search.scroll_current",
+                        "search.scroll_time",
+                        "search.scroll_total",
+                        "segments.count",
+                        "segments.fixed_bitset_memory",
+                        "segments.index_writer_memory",
+                        "segments.memory",
+                        "segments.version_map_memory",
+                        "sparse_vector.value_count",
+                        "status",
+                        "store.size",
+                        "suggest.current",
+                        "suggest.time",
+                        "suggest.total",
+                        "uuid",
+                        "warmer.current",
+                        "warmer.total",
+                        "warmer.total_time",
+                    ],
+                ],
+            ]
+        ] = None,
         health: t.Optional[
             t.Union[str, t.Literal["green", "red", "unavailable", "unknown", "yellow"]]
         ] = None,
@@ -627,7 +1096,8 @@ class CatClient(NamespacedClient):
         :param expand_wildcards: The type of index that wildcard patterns can match.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param health: The health status used to limit returned indices. By default,
             the response includes indices of any health status.
         :param help: When set to `true` will output available columns. This option can't
@@ -699,7 +1169,12 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[t.Union[str, t.Literal["host", "id", "ip", "node"]]],
+                t.Union[str, t.Literal["host", "id", "ip", "node"]],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -720,7 +1195,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -1689,7 +2165,24 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "attr", "host", "id", "ip", "node", "pid", "port", "value"
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "attr", "host", "id", "ip", "node", "pid", "port", "value"
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -1710,7 +2203,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -2050,7 +2544,19 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal["insertOrder", "priority", "source", "timeInQueue"],
+                    ]
+                ],
+                t.Union[
+                    str, t.Literal["insertOrder", "priority", "source", "timeInQueue"]
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -2074,7 +2580,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
@@ -2132,7 +2639,19 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal["component", "description", "id", "name", "version"],
+                    ]
+                ],
+                t.Union[
+                    str, t.Literal["component", "description", "id", "name", "version"]
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         include_bootstrap: t.Optional[bool] = None,
@@ -2154,7 +2673,8 @@ class CatClient(NamespacedClient):
 
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true` will output available columns. This option can't
             be combined with any other query string option.
         :param include_bootstrap: Include bootstrap plugins in the response
@@ -2972,7 +3492,52 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "action",
+                            "id",
+                            "ip",
+                            "node",
+                            "node_id",
+                            "parent_task_id",
+                            "port",
+                            "running_time",
+                            "running_time_ns",
+                            "start_time",
+                            "task_id",
+                            "timestamp",
+                            "type",
+                            "version",
+                            "x_opaque_id",
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "action",
+                        "id",
+                        "ip",
+                        "node",
+                        "node_id",
+                        "parent_task_id",
+                        "port",
+                        "running_time",
+                        "running_time_ns",
+                        "start_time",
+                        "task_id",
+                        "timestamp",
+                        "type",
+                        "version",
+                        "x_opaque_id",
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         nodes: t.Optional[t.Sequence[str]] = None,
@@ -3001,7 +3566,8 @@ class CatClient(NamespacedClient):
             shard recoveries.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true`, outputs available columns. This option can't
             be combined with any other query string option.
         :param nodes: Unique node identifiers, which are used to limit the response.
@@ -3070,7 +3636,24 @@ class CatClient(NamespacedClient):
         error_trace: t.Optional[bool] = None,
         filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
         format: t.Optional[str] = None,
-        h: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        h: t.Optional[
+            t.Union[
+                t.Sequence[
+                    t.Union[
+                        str,
+                        t.Literal[
+                            "composed_of", "index_patterns", "name", "order", "version"
+                        ],
+                    ]
+                ],
+                t.Union[
+                    str,
+                    t.Literal[
+                        "composed_of", "index_patterns", "name", "order", "version"
+                    ],
+                ],
+            ]
+        ] = None,
         help: t.Optional[bool] = None,
         human: t.Optional[bool] = None,
         local: t.Optional[bool] = None,
@@ -3094,7 +3677,8 @@ class CatClient(NamespacedClient):
             If omitted, all templates are returned.
         :param format: Specifies the format to return the columnar data in, can be set
             to `text`, `json`, `cbor`, `yaml`, or `smile`.
-        :param h: List of columns to appear in the response. Supports simple wildcards.
+        :param h: A comma-separated list of column names to display. It supports simple
+            wildcards.
         :param help: When set to `true`, outputs available columns. This option can't
             be combined with any other query string option.
         :param local: If `true`, the request computes the list of selected nodes from
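
The narrowed `h` typings above enumerate the valid column names for each cat API via `t.Literal`, while still accepting plain strings and simple wildcards. A minimal sketch of how a caller benefits (the local endpoint is an assumption):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Type checkers and IDEs can now flag misspelled column names, while
# arbitrary strings and wildcards remain valid at runtime.
print(client.cat.pending_tasks(h=["insertOrder", "priority", "timeInQueue"], format="json"))
print(client.cat.plugins(h=["name", "component", "version"], format="json"))
```
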
diff -pruN 9.1.0-3/elasticsearch/_sync/client/cluster.py 9.1.1-1/elasticsearch/_sync/client/cluster.py
--- 9.1.0-3/elasticsearch/_sync/client/cluster.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/cluster.py	2025-09-12 13:23:45.000000000 +0000
@@ -374,8 +374,13 @@ class ClusterClient(NamespacedClient):
         `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings>`_
 
         :param flat_settings: If `true`, returns settings in flat format.
-        :param include_defaults: If `true`, returns default cluster settings from the
-            local node.
+        :param include_defaults: If `true`, also returns default values for all other
+            cluster settings, reflecting the values in the `elasticsearch.yml` file of
+            one of the nodes in the cluster. If the nodes in your cluster do not all
+            have the same values in their `elasticsearch.yml` config files then the values
+            returned by this API may vary from invocation to invocation and may not reflect
+            the values that Elasticsearch uses in all situations. Use the `GET _nodes/settings`
+            API to fetch the settings for each individual node in your cluster.
         :param master_timeout: Period to wait for a connection to the master node. If
             no response is received before the timeout expires, the request fails and
             returns an error.
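
A short sketch of the behavior the expanded `include_defaults` docstring describes (endpoint is an assumption):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# With include_defaults=True the response gains a "defaults" section whose
# values come from the elasticsearch.yml of one of the nodes, so they may
# vary between invocations on heterogeneously configured clusters.
resp = client.cluster.get_settings(include_defaults=True, flat_settings=True)
print(resp["defaults"].get("cluster.name"))

# For an authoritative per-node view, query each node's settings instead,
# as the docstring suggests (GET _nodes/settings):
node_settings = client.nodes.info(metric="settings")
```
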
diff -pruN 9.1.0-3/elasticsearch/_sync/client/esql.py 9.1.1-1/elasticsearch/_sync/client/esql.py
--- 9.1.0-3/elasticsearch/_sync/client/esql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/esql.py	2025-09-12 13:23:45.000000000 +0000
@@ -28,6 +28,9 @@ from .utils import (
     _stability_warning,
 )
 
+if t.TYPE_CHECKING:
+    from elasticsearch.esql import ESQLBase
+
 
 class EsqlClient(NamespacedClient):
 
@@ -50,7 +53,7 @@ class EsqlClient(NamespacedClient):
     def async_query(
         self,
         *,
-        query: t.Optional[str] = None,
+        query: t.Optional[t.Union[str, "ESQLBase"]] = None,
         allow_partial_results: t.Optional[bool] = None,
         columnar: t.Optional[bool] = None,
         delimiter: t.Optional[str] = None,
@@ -111,7 +114,12 @@ class EsqlClient(NamespacedClient):
             which has the name of all the columns.
         :param filter: Specify a Query DSL query in the filter parameter to filter the
             set of documents that an ES|QL query runs on.
-        :param format: A short version of the Accept header, for example `json` or `yaml`.
+        :param format: A short version of the Accept header, e.g. `json` or `yaml`. `csv`,
+            `tsv`, and `txt` formats will return results in a tabular format, excluding
+            other metadata fields from the response. For async requests, nothing will
+            be returned if the async query doesn't finish within the timeout. The query
+            ID and running status are available in the `X-Elasticsearch-Async-Id` and
+            `X-Elasticsearch-Async-Is-Running` HTTP headers of the response, respectively.
         :param include_ccs_metadata: When set to `true` and performing a cross-cluster
             query, the response will include an extra `_clusters` object with information
             about the clusters that participated in the search along with info such as
@@ -165,7 +173,7 @@ class EsqlClient(NamespacedClient):
             __query["pretty"] = pretty
         if not __body:
             if query is not None:
-                __body["query"] = query
+                __body["query"] = str(query)
             if columnar is not None:
                 __body["columnar"] = columnar
             if filter is not None:
@@ -405,6 +413,8 @@ class EsqlClient(NamespacedClient):
           Returns an object with extended information about a running ES|QL query.</p>
 
 
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-get-query>`_
+
         :param id: The query ID
         """
         if id in SKIP_IN_PATH:
@@ -446,6 +456,8 @@ class EsqlClient(NamespacedClient):
           <p>Get running ES|QL queries information.
           Returns an object containing IDs and other information about the running ES|QL queries.</p>
 
+
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-list-queries>`_
         """
         __path_parts: t.Dict[str, str] = {}
         __path = "/_query/queries"
@@ -484,7 +496,7 @@ class EsqlClient(NamespacedClient):
     def query(
         self,
         *,
-        query: t.Optional[str] = None,
+        query: t.Optional[t.Union[str, "ESQLBase"]] = None,
         allow_partial_results: t.Optional[bool] = None,
         columnar: t.Optional[bool] = None,
         delimiter: t.Optional[str] = None,
@@ -539,7 +551,9 @@ class EsqlClient(NamespacedClient):
             `all_columns` which has the name of all columns.
         :param filter: Specify a Query DSL query in the filter parameter to filter the
             set of documents that an ES|QL query runs on.
-        :param format: A short version of the Accept header, e.g. json, yaml.
+        :param format: A short version of the Accept header, e.g. `json` or `yaml`. `csv`,
+            `tsv`, and `txt` formats will return results in a tabular format, excluding
+            other metadata fields from the response.
         :param include_ccs_metadata: When set to `true` and performing a cross-cluster
             query, the response will include an extra `_clusters` object with information
             about the clusters that participated in the search along with info such as
@@ -579,7 +593,7 @@ class EsqlClient(NamespacedClient):
             __query["pretty"] = pretty
         if not __body:
             if query is not None:
-                __body["query"] = query
+                __body["query"] = str(query)
             if columnar is not None:
                 __body["columnar"] = columnar
             if filter is not None:
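
With `query` now typed as `str | ESQLBase` and coerced with `str()` before being placed in the request body, a query builder object can be passed straight to the ES|QL endpoints. A minimal sketch (index name and endpoint are assumptions):

```python
from elasticsearch import Elasticsearch
from elasticsearch.esql import ESQL

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# The builder renders to a string inside the client, so it can be used
# anywhere a plain ES|QL query string was accepted before.
query = ESQL.from_("employees").where("height > 2").sort("last_name").limit(10)
resp = client.esql.query(query=query, format="csv")  # tabular output
print(resp.body)
```
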
diff -pruN 9.1.0-3/elasticsearch/_sync/client/indices.py 9.1.1-1/elasticsearch/_sync/client/indices.py
--- 9.1.0-3/elasticsearch/_sync/client/indices.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/indices.py	2025-09-12 13:23:45.000000000 +0000
@@ -1208,7 +1208,7 @@ class IndicesClient(NamespacedClient):
           Removes the data stream options from a data stream.</p>
 
 
-        `<https://www.elastic.co/guide/en/elasticsearch/reference/9.1/index.html>`_
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-stream-options>`_
 
         :param name: A comma-separated list of data streams of which the data stream
             options will be deleted; use `*` to get all data streams
@@ -2568,7 +2568,7 @@ class IndicesClient(NamespacedClient):
           <p>Get the data stream options configuration of one or more data streams.</p>
 
 
-        `<https://www.elastic.co/guide/en/elasticsearch/reference/9.1/index.html>`_
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream-options>`_
 
         :param name: Comma-separated list of data streams to limit the request. Supports
             wildcards (`*`). To target all data streams, omit this parameter or use `*`
@@ -3684,7 +3684,7 @@ class IndicesClient(NamespacedClient):
           Update the data stream options of the specified data streams.</p>
 
 
-        `<https://www.elastic.co/guide/en/elasticsearch/reference/9.1/index.html>`_
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options>`_
 
         :param name: Comma-separated list of data streams used to limit the request.
             Supports wildcards (`*`). To target all data streams use `*` or `_all`.
@@ -4051,7 +4051,7 @@ class IndicesClient(NamespacedClient):
           <li>Change a field's mapping using reindexing</li>
           <li>Rename a field using a field alias</li>
           </ul>
-          <p>Learn how to use the update mapping API with practical examples in the <a href="https://www.elastic.co/docs//manage-data/data-store/mapping/update-mappings-examples">Update mapping API examples</a> guide.</p>
+          <p>Learn how to use the update mapping API with practical examples in the <a href="https://www.elastic.co/docs/manage-data/data-store/mapping/update-mappings-examples">Update mapping API examples</a> guide.</p>
 
 
         `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping>`_
diff -pruN 9.1.0-3/elasticsearch/_sync/client/inference.py 9.1.1-1/elasticsearch/_sync/client/inference.py
--- 9.1.0-3/elasticsearch/_sync/client/inference.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/inference.py	2025-09-12 13:23:45.000000000 +0000
@@ -396,17 +396,18 @@ class InferenceClient(NamespacedClient):
           <li>Azure AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
           <li>Azure OpenAI (<code>completion</code>, <code>text_embedding</code>)</li>
           <li>Cohere (<code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
-          <li>DeepSeek (<code>completion</code>, <code>chat_completion</code>)</li>
+          <li>DeepSeek (<code>chat_completion</code>, <code>completion</code>)</li>
           <li>Elasticsearch (<code>rerank</code>, <code>sparse_embedding</code>, <code>text_embedding</code> - this service is for built-in models and models uploaded through Eland)</li>
           <li>ELSER (<code>sparse_embedding</code>)</li>
           <li>Google AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
-          <li>Google Vertex AI (<code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>Google Vertex AI (<code>chat_completion</code>, <code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
           <li>Hugging Face (<code>chat_completion</code>, <code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>JinaAI (<code>rerank</code>, <code>text_embedding</code>)</li>
+          <li>Llama (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
           <li>Mistral (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
           <li>OpenAI (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
-          <li>VoyageAI (<code>text_embedding</code>, <code>rerank</code>)</li>
+          <li>VoyageAI (<code>rerank</code>, <code>text_embedding</code>)</li>
           <li>Watsonx inference integration (<code>text_embedding</code>)</li>
-          <li>JinaAI (<code>text_embedding</code>, <code>rerank</code>)</li>
           </ul>
 
 
diff -pruN 9.1.0-3/elasticsearch/_sync/client/sql.py 9.1.1-1/elasticsearch/_sync/client/sql.py
--- 9.1.0-3/elasticsearch/_sync/client/sql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/sql.py	2025-09-12 13:23:45.000000000 +0000
@@ -283,7 +283,7 @@ class SqlClient(NamespacedClient):
         keep_alive: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
         keep_on_completion: t.Optional[bool] = None,
         page_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
-        params: t.Optional[t.Mapping[str, t.Any]] = None,
+        params: t.Optional[t.Sequence[t.Any]] = None,
         pretty: t.Optional[bool] = None,
         query: t.Optional[str] = None,
         request_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
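
The `params` type change from a mapping to a sequence reflects that Elasticsearch SQL binds parameters positionally to `?` placeholders. A minimal sketch (index and endpoint are assumptions):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Parameters are bound in order to the `?` placeholders, hence a list
# rather than a name-to-value mapping.
resp = client.sql.query(
    query="SELECT first_name, height FROM employees WHERE height > ? LIMIT ?",
    params=[2.0, 5],
)
print(resp["rows"])
```
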
diff -pruN 9.1.0-3/elasticsearch/_sync/client/transform.py 9.1.1-1/elasticsearch/_sync/client/transform.py
--- 9.1.0-3/elasticsearch/_sync/client/transform.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_sync/client/transform.py	2025-09-12 13:23:45.000000000 +0000
@@ -602,6 +602,66 @@ class TransformClient(NamespacedClient):
             path_parts=__path_parts,
         )
 
+    @_rewrite_parameters()
+    def set_upgrade_mode(
+        self,
+        *,
+        enabled: t.Optional[bool] = None,
+        error_trace: t.Optional[bool] = None,
+        filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
+        human: t.Optional[bool] = None,
+        pretty: t.Optional[bool] = None,
+        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
+    ) -> ObjectApiResponse[t.Any]:
+        """
+        .. raw:: html
+
+          <p>Set upgrade_mode for transform indices.
+          Sets a cluster-wide upgrade_mode setting that prepares transform
+          indices for an upgrade.
+          When upgrading your cluster, in some circumstances you must restart your
+          nodes and reindex your transform indices. In those circumstances,
+          there must be no transforms running. You can close the transforms,
+          do the upgrade, then open all the transforms again. Alternatively,
+          you can use this API to temporarily halt tasks associated with the transforms
+          and prevent new transforms from opening. You can also use this API
+          during upgrades that do not require you to reindex your transform
+          indices, though stopping transforms is not a requirement in that case.
+          You can see the current value for the upgrade_mode setting by using the get
+          transform info API.</p>
+
+
+        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-set-upgrade-mode>`_
+
+        :param enabled: When `true`, enables `upgrade_mode`, which temporarily halts
+            all transform tasks and prohibits new transform tasks from starting.
+        :param timeout: The time to wait for the request to be completed.
+        """
+        __path_parts: t.Dict[str, str] = {}
+        __path = "/_transform/set_upgrade_mode"
+        __query: t.Dict[str, t.Any] = {}
+        if enabled is not None:
+            __query["enabled"] = enabled
+        if error_trace is not None:
+            __query["error_trace"] = error_trace
+        if filter_path is not None:
+            __query["filter_path"] = filter_path
+        if human is not None:
+            __query["human"] = human
+        if pretty is not None:
+            __query["pretty"] = pretty
+        if timeout is not None:
+            __query["timeout"] = timeout
+        __headers = {"accept": "application/json"}
+        return self.perform_request(  # type: ignore[return-value]
+            "POST",
+            __path,
+            params=__query,
+            headers=__headers,
+            endpoint_id="transform.set_upgrade_mode",
+            path_parts=__path_parts,
+        )
+
     @_rewrite_parameters(
         parameter_aliases={"from": "from_"},
     )
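
A minimal usage sketch for the new endpoint (timeout value and endpoint are assumptions):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Halt all transform tasks and block new ones before restarting nodes or
# reindexing transform indices...
client.transform.set_upgrade_mode(enabled=True, timeout="30s")

# ...perform the upgrade, then let transforms resume:
client.transform.set_upgrade_mode(enabled=False)
```
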
diff -pruN 9.1.0-3/elasticsearch/_version.py 9.1.1-1/elasticsearch/_version.py
--- 9.1.0-3/elasticsearch/_version.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/_version.py	2025-09-12 13:23:45.000000000 +0000
@@ -15,4 +15,4 @@
 #  specific language governing permissions and limitations
 #  under the License.
 
-__versionstr__ = "9.1.0"
+__versionstr__ = "9.1.1"
diff -pruN 9.1.0-3/elasticsearch/dsl/_async/document.py 9.1.1-1/elasticsearch/dsl/_async/document.py
--- 9.1.0-3/elasticsearch/dsl/_async/document.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/dsl/_async/document.py	2025-09-12 13:23:45.000000000 +0000
@@ -20,6 +20,7 @@ from typing import (
     TYPE_CHECKING,
     Any,
     AsyncIterable,
+    AsyncIterator,
     Dict,
     List,
     Optional,
@@ -42,6 +43,7 @@ from .search import AsyncSearch
 
 if TYPE_CHECKING:
     from elasticsearch import AsyncElasticsearch
+    from elasticsearch.esql.esql import ESQLBase
 
 
 class AsyncIndexMeta(DocumentMeta):
@@ -520,3 +522,85 @@ class AsyncDocument(DocumentBase, metacl
                 return action
 
         return await async_bulk(es, Generate(actions), **kwargs)
+
+    @classmethod
+    async def esql_execute(
+        cls,
+        query: "ESQLBase",
+        return_additional: bool = False,
+        ignore_missing_fields: bool = False,
+        using: Optional[AsyncUsingType] = None,
+        **kwargs: Any,
+    ) -> AsyncIterator[Union[Self, Tuple[Self, Dict[str, Any]]]]:
+        """
+        Execute the given ES|QL query and return an iterator over the
+        resulting documents. When ``return_additional=True`` is passed, the
+        iterator yields 2-element tuples instead, with an instance of this
+        ``Document`` as the first element and a dictionary with any remaining
+        columns requested in the query as the second.
+
+        :arg query: an ES|QL query object created with the ``esql_from()`` method.
+        :arg return_additional: if ``False`` (the default), this method returns
+            document objects. If set to ``True``, the method returns tuples with
+            a document in the first element and a dictionary with any additional
+            columns returned by the query in the second element.
+        :arg ignore_missing_fields: if ``False`` (the default), all the fields of
+            the document must be present in the query, or else an exception is
+            raised. Set to ``True`` to allow missing fields, which will result in
+            partially initialized document objects.
+        :arg using: connection alias to use, defaults to ``'default'``
+        :arg kwargs: additional options for the ``client.esql.query()`` function.
+        """
+        es = cls._get_connection(using)
+        response = await es.esql.query(query=str(query), **kwargs)
+        query_columns = [col["name"] for col in response.body.get("columns", [])]
+
+        # Here we get the list of columns defined in the document, which are the
+        # columns that we will take from each result to assemble the document
+        # object.
+        # When `for_esql=False` is passed below by default, the list will include
+        # nested fields, which ES|QL does not return, causing an error. When passing
+        # `ignore_missing_fields=True` the list will be generated with
+        # `for_esql=True`, so the error will not occur, but the documents will
+        # not have any Nested objects in them.
+        doc_fields = set(cls._get_field_names(for_esql=ignore_missing_fields))
+        if not ignore_missing_fields and not doc_fields.issubset(set(query_columns)):
+            raise ValueError(
+                f"Not all fields of {cls.__name__} were returned by the query. "
+                "Make sure your document does not use Nested fields, which are "
+                "currently not supported in ES|QL. To force the query to be "
+                "evaluated in spite of the missing fields, pass set the "
+                "ignore_missing_fields=True option in the esql_execute() call."
+            )
+        non_doc_fields: set[str] = set(query_columns) - doc_fields - {"_id"}
+        index_id = query_columns.index("_id")
+
+        results = response.body.get("values", [])
+        for column_values in results:
+            # create a dictionary with all the document fields, expanding the
+            # dot notation returned by ES|QL into the recursive dictionaries
+            # used by Document.from_dict()
+            doc_dict: Dict[str, Any] = {}
+            for col, val in zip(query_columns, column_values):
+                if col in doc_fields:
+                    cols = col.split(".")
+                    d = doc_dict
+                    for c in cols[:-1]:
+                        if c not in d:
+                            d[c] = {}
+                        d = d[c]
+                    d[cols[-1]] = val
+
+            # create the document instance
+            obj = cls(meta={"_id": column_values[index_id]})
+            obj._from_dict(doc_dict)
+
+            if return_additional:
+                # build a dict with any other values included in the response
+                other = {
+                    col: val
+                    for col, val in zip(query_columns, column_values)
+                    if col in non_doc_fields
+                }
+
+                yield obj, other
+            else:
+                yield obj
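
Since this variant is an async generator, it is consumed with `async for`. A minimal sketch reusing a cut-down `Employee` mapping (connection details are assumptions):

```python
import asyncio

from elasticsearch.dsl import AsyncDocument, M, async_connections


class Employee(AsyncDocument):
    first_name: M[str]
    last_name: M[str]
    height: M[float]

    class Index:
        name = "employees"


async def main() -> None:
    async_connections.create_connection(hosts=["http://localhost:9200"])  # hypothetical
    query = Employee.esql_from().where(Employee.height > 2).limit(10)
    async for emp in Employee.esql_execute(query):
        print(emp.first_name, emp.last_name)


asyncio.run(main())
```
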
diff -pruN 9.1.0-3/elasticsearch/dsl/_sync/document.py 9.1.1-1/elasticsearch/dsl/_sync/document.py
--- 9.1.0-3/elasticsearch/dsl/_sync/document.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/dsl/_sync/document.py	2025-09-12 13:23:45.000000000 +0000
@@ -21,6 +21,7 @@ from typing import (
     Any,
     Dict,
     Iterable,
+    Iterator,
     List,
     Optional,
     Tuple,
@@ -42,6 +43,7 @@ from .search import Search
 
 if TYPE_CHECKING:
     from elasticsearch import Elasticsearch
+    from elasticsearch.esql.esql import ESQLBase
 
 
 class IndexMeta(DocumentMeta):
@@ -512,3 +514,85 @@ class Document(DocumentBase, metaclass=I
                 return action
 
         return bulk(es, Generate(actions), **kwargs)
+
+    @classmethod
+    def esql_execute(
+        cls,
+        query: "ESQLBase",
+        return_additional: bool = False,
+        ignore_missing_fields: bool = False,
+        using: Optional[UsingType] = None,
+        **kwargs: Any,
+    ) -> Iterator[Union[Self, Tuple[Self, Dict[str, Any]]]]:
+        """
+        Execute the given ES|QL query and return an iterator over the
+        resulting documents. When ``return_additional=True`` is passed, the
+        iterator yields 2-element tuples instead, with an instance of this
+        ``Document`` as the first element and a dictionary with any remaining
+        columns requested in the query as the second.
+
+        :arg query: an ES|QL query object created with the ``esql_from()`` method.
+        :arg return_additional: if ``False`` (the default), this method returns
+            document objects. If set to ``True``, the method returns tuples with
+            a document in the first element and a dictionary with any additional
+            columns returned by the query in the second element.
+        :arg ignore_missing_fields: if ``False`` (the default), all the fields of
+            the document must be present in the query, or else an exception is
+            raised. Set to ``True`` to allow missing fields, which will result in
+            partially initialized document objects.
+        :arg using: connection alias to use, defaults to ``'default'``
+        :arg kwargs: additional options for the ``client.esql.query()`` function.
+        """
+        es = cls._get_connection(using)
+        response = es.esql.query(query=str(query), **kwargs)
+        query_columns = [col["name"] for col in response.body.get("columns", [])]
+
+        # Here we get the list of columns defined in the document, which are the
+        # columns that we will take from each result to assemble the document
+        # object.
+        # When `for_esql=False` is passed below by default, the list will include
+        # nested fields, which ES|QL does not return, causing an error. When passing
+        # `ignore_missing_fields=True` the list will be generated with
+        # `for_esql=True`, so the error will not occur, but the documents will
+        # not have any Nested objects in them.
+        doc_fields = set(cls._get_field_names(for_esql=ignore_missing_fields))
+        if not ignore_missing_fields and not doc_fields.issubset(set(query_columns)):
+            raise ValueError(
+                f"Not all fields of {cls.__name__} were returned by the query. "
+                "Make sure your document does not use Nested fields, which are "
+                "currently not supported in ES|QL. To force the query to be "
+                "evaluated in spite of the missing fields, pass set the "
+                "ignore_missing_fields=True option in the esql_execute() call."
+            )
+        non_doc_fields: set[str] = set(query_columns) - doc_fields - {"_id"}
+        index_id = query_columns.index("_id")
+
+        results = response.body.get("values", [])
+        for column_values in results:
+            # create a dictionary with all the document fields, expanding the
+            # dot notation returned by ES|QL into the recursive dictionaries
+            # used by Document.from_dict()
+            doc_dict: Dict[str, Any] = {}
+            for col, val in zip(query_columns, column_values):
+                if col in doc_fields:
+                    cols = col.split(".")
+                    d = doc_dict
+                    for c in cols[:-1]:
+                        if c not in d:
+                            d[c] = {}
+                        d = d[c]
+                    d[cols[-1]] = val
+
+            # create the document instance
+            obj = cls(meta={"_id": column_values[index_id]})
+            obj._from_dict(doc_dict)
+
+            if return_additional:
+                # build a dict with any other values included in the response
+                other = {
+                    col: val
+                    for col, val in zip(query_columns, column_values)
+                    if col in non_doc_fields
+                }
+
+                yield obj, other
+            else:
+                yield obj
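
A sketch of the `return_additional=True` path: an `EVAL` column that is not part of the document mapping comes back in the extras dictionary. The connection details and the string expression are assumptions:

```python
from elasticsearch.dsl import Document, M, connections


class Employee(Document):
    first_name: M[str]
    last_name: M[str]
    height: M[float]

    class Index:
        name = "employees"


connections.create_connection(hosts=["http://localhost:9200"])  # hypothetical

# `height_cm` is not a mapped field, so it lands in the second tuple element.
query = Employee.esql_from().eval(height_cm="height * 100.0").limit(5)
for emp, extra in Employee.esql_execute(query, return_additional=True):
    print(emp.last_name, extra["height_cm"])
```
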
diff -pruN 9.1.0-3/elasticsearch/dsl/document_base.py 9.1.1-1/elasticsearch/dsl/document_base.py
--- 9.1.0-3/elasticsearch/dsl/document_base.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/dsl/document_base.py	2025-09-12 13:23:45.000000000 +0000
@@ -49,6 +49,7 @@ from .utils import DOC_META_FIELDS, Obje
 if TYPE_CHECKING:
     from elastic_transport import ObjectApiResponse
 
+    from ..esql.esql import ESQLBase
     from .index_base import IndexBase
 
 
@@ -602,3 +603,44 @@ class DocumentBase(ObjectBase):
 
         meta["_source"] = d
         return meta
+
+    @classmethod
+    def _get_field_names(
+        cls, for_esql: bool = False, nested_class: Optional[type[InnerDoc]] = None
+    ) -> List[str]:
+        """Return the list of field names used by this document.
+        If the document has nested objects, their fields are reported using dot
+        notation. If the ``for_esql`` argument is set to ``True``, the list omits
+        nested fields, which are currently unsupported in ES|QL.
+        """
+        fields = []
+        class_ = nested_class or cls
+        for field_name in class_._doc_type.mapping:
+            field = class_._doc_type.mapping[field_name]
+            if isinstance(field, Object):
+                if for_esql and isinstance(field, Nested):
+                    # ES|QL does not recognize Nested fields at this time
+                    continue
+                sub_fields = cls._get_field_names(
+                    for_esql=for_esql, nested_class=field._doc_class
+                )
+                for sub_field in sub_fields:
+                    fields.append(f"{field_name}.{sub_field}")
+            else:
+                fields.append(field_name)
+        return fields
+
+    @classmethod
+    def esql_from(cls) -> "ESQLBase":
+        """Return a base ES|QL query for instances of this document class.
+
+        The returned query is initialized with ``FROM`` and ``KEEP`` statements,
+        and can be completed as desired.
+        """
+        from ..esql import ESQL  # here to avoid circular imports
+
+        return (
+            ESQL.from_(cls)
+            .metadata("_id")
+            .keep("_id", *tuple(cls._get_field_names(for_esql=True)))
+        )
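
For a document with fields `emp_no` and `first_name` in an `employees` index, `esql_from()` would render roughly as follows (a sketch; exact whitespace may differ):

```python
from elasticsearch.dsl import Document, M


class Employee(Document):
    emp_no: M[int]
    first_name: M[str]

    class Index:
        name = "employees"


print(Employee.esql_from())
# FROM employees METADATA _id
# | KEEP _id, emp_no, first_name
```
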
diff -pruN 9.1.0-3/elasticsearch/dsl/field.py 9.1.1-1/elasticsearch/dsl/field.py
--- 9.1.0-3/elasticsearch/dsl/field.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/dsl/field.py	2025-09-12 13:23:45.000000000 +0000
@@ -119,9 +119,16 @@ class Field(DslBase):
     def __getitem__(self, subfield: str) -> "Field":
         return cast(Field, self._params.get("fields", {})[subfield])
 
-    def _serialize(self, data: Any) -> Any:
+    def _serialize(self, data: Any, skip_empty: bool) -> Any:
         return data
 
+    def _safe_serialize(self, data: Any, skip_empty: bool) -> Any:
+        try:
+            return self._serialize(data, skip_empty)
+        except TypeError:
+            # older method signature, without skip_empty
+            return self._serialize(data)  # type: ignore[call-arg]
+
     def _deserialize(self, data: Any) -> Any:
         return data
 
@@ -133,10 +140,16 @@ class Field(DslBase):
             return AttrList([])
         return self._empty()
 
-    def serialize(self, data: Any) -> Any:
+    def serialize(self, data: Any, skip_empty: bool = True) -> Any:
         if isinstance(data, (list, AttrList, tuple)):
-            return list(map(self._serialize, cast(Iterable[Any], data)))
-        return self._serialize(data)
+            return list(
+                map(
+                    self._safe_serialize,
+                    cast(Iterable[Any], data),
+                    [skip_empty] * len(data),
+                )
+            )
+        return self._safe_serialize(data, skip_empty)
 
     def deserialize(self, data: Any) -> Any:
         if isinstance(data, (list, AttrList, tuple)):
@@ -186,7 +199,7 @@ class RangeField(Field):
         data = {k: self._core_field.deserialize(v) for k, v in data.items()}  # type: ignore[union-attr]
         return Range(data)
 
-    def _serialize(self, data: Any) -> Optional[Dict[str, Any]]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[Dict[str, Any]]:
         if data is None:
             return None
         if not isinstance(data, collections.abc.Mapping):
@@ -550,7 +563,7 @@ class Object(Field):
         return self._wrap(data)
 
     def _serialize(
-        self, data: Optional[Union[Dict[str, Any], "InnerDoc"]]
+        self, data: Optional[Union[Dict[str, Any], "InnerDoc"]], skip_empty: bool
     ) -> Optional[Dict[str, Any]]:
         if data is None:
             return None
@@ -559,7 +572,7 @@ class Object(Field):
         if isinstance(data, collections.abc.Mapping):
             return data
 
-        return data.to_dict()
+        return data.to_dict(skip_empty=skip_empty)
 
     def clean(self, data: Any) -> Any:
         data = super().clean(data)
@@ -768,7 +781,7 @@ class Binary(Field):
     def _deserialize(self, data: Any) -> bytes:
         return base64.b64decode(data)
 
-    def _serialize(self, data: Any) -> Optional[str]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[str]:
         if data is None:
             return None
         return base64.b64encode(data).decode()
@@ -2619,7 +2632,7 @@ class Ip(Field):
         # the ipaddress library for pypy only accepts unicode.
         return ipaddress.ip_address(unicode(data))
 
-    def _serialize(self, data: Any) -> Optional[str]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[str]:
         if data is None:
             return None
         return str(data)
@@ -3367,7 +3380,7 @@ class Percolator(Field):
     def _deserialize(self, data: Any) -> "Query":
         return Q(data)  # type: ignore[no-any-return]
 
-    def _serialize(self, data: Any) -> Optional[Dict[str, Any]]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[Dict[str, Any]]:
         if data is None:
             return None
         return data.to_dict()  # type: ignore[no-any-return]
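
The `_safe_serialize()` shim keeps third-party `Field` subclasses written against the old one-argument `_serialize()` signature working. A minimal sketch (the subclass is hypothetical):

```python
from typing import Any

from elasticsearch.dsl.field import Keyword


class LegacyUpper(Keyword):
    # Old-style override without the new skip_empty argument.
    def _serialize(self, data: Any) -> Any:
        return data.upper() if isinstance(data, str) else data


f = LegacyUpper()
# serialize() first tries _serialize(data, skip_empty); the TypeError raised
# by the legacy signature triggers the one-argument fallback.
print(f.serialize("abc"))       # -> "ABC"
print(f.serialize(["a", "b"]))  # lists are serialized element by element
```
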
diff -pruN 9.1.0-3/elasticsearch/dsl/response/aggs.py 9.1.1-1/elasticsearch/dsl/response/aggs.py
--- 9.1.0-3/elasticsearch/dsl/response/aggs.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/dsl/response/aggs.py	2025-09-12 13:23:45.000000000 +0000
@@ -63,7 +63,7 @@ class BucketData(AggResponse[_R]):
         )
 
     def __iter__(self) -> Iterator["Agg"]:  # type: ignore[override]
-        return iter(self.buckets)  # type: ignore[arg-type]
+        return iter(self.buckets)
 
     def __len__(self) -> int:
         return len(self.buckets)
diff -pruN 9.1.0-3/elasticsearch/dsl/types.py 9.1.1-1/elasticsearch/dsl/types.py
--- 9.1.0-3/elasticsearch/dsl/types.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/dsl/types.py	2025-09-12 13:23:45.000000000 +0000
@@ -144,12 +144,29 @@ class ChiSquareHeuristic(AttrDict[Any]):
 
 class ChunkingSettings(AttrDict[Any]):
     """
-    :arg strategy: (required) The chunking strategy: `sentence` or `word`.
-        Defaults to `sentence` if omitted.
+    :arg strategy: (required) The chunking strategy: `sentence`, `word`,
+        `none` or `recursive`. If `strategy` is set to `recursive`, you
+        must also specify `max_chunk_size` and either `separators` or
+        `separator_group`. Learn more about the different chunking
+        strategies in the linked documentation. Defaults to `sentence`
+        if omitted.
     :arg max_chunk_size: (required) The maximum size of a chunk in words.
         This value cannot be higher than `300` or lower than `20` (for
         `sentence` strategy) or `10` (for `word` strategy). Defaults to
         `250` if omitted.
+    :arg separator_group: Only applicable to the `recursive` strategy and
+        required when using it.  Sets a predefined list of separators in
+        the saved chunking settings based on the selected text type.
+        Values can be `markdown` or `plaintext`.  Using this parameter is
+        an alternative to manually specifying a custom `separators` list.
+    :arg separators: Only applicable to the `recursive` strategy and
+        required when using it.  A list of strings used as possible split
+        points when chunking text.  Each string can be a plain string or a
+        regular expression (regex) pattern. The system tries each
+        separator in order to split the text, starting from the first item
+        in the list.  After splitting, it attempts to recombine smaller
+        pieces into larger chunks that stay within the `max_chunk_size`
+        limit, to reduce the total number of chunks generated.
     :arg overlap: The number of overlapping words for chunks. It is
         applicable only to a `word` chunking strategy. This value cannot
         be higher than half the `max_chunk_size` value. Defaults to `100`
@@ -161,6 +178,8 @@ class ChunkingSettings(AttrDict[Any]):
 
     strategy: Union[str, DefaultType]
     max_chunk_size: Union[int, DefaultType]
+    separator_group: Union[str, DefaultType]
+    separators: Union[Sequence[str], DefaultType]
     overlap: Union[int, DefaultType]
     sentence_overlap: Union[int, DefaultType]
 
@@ -169,6 +188,8 @@ class ChunkingSettings(AttrDict[Any]):
         *,
         strategy: Union[str, DefaultType] = DEFAULT,
         max_chunk_size: Union[int, DefaultType] = DEFAULT,
+        separator_group: Union[str, DefaultType] = DEFAULT,
+        separators: Union[Sequence[str], DefaultType] = DEFAULT,
         overlap: Union[int, DefaultType] = DEFAULT,
         sentence_overlap: Union[int, DefaultType] = DEFAULT,
         **kwargs: Any,
@@ -177,6 +198,10 @@ class ChunkingSettings(AttrDict[Any]):
             kwargs["strategy"] = strategy
         if max_chunk_size is not DEFAULT:
             kwargs["max_chunk_size"] = max_chunk_size
+        if separator_group is not DEFAULT:
+            kwargs["separator_group"] = separator_group
+        if separators is not DEFAULT:
+            kwargs["separators"] = separators
         if overlap is not DEFAULT:
             kwargs["overlap"] = overlap
         if sentence_overlap is not DEFAULT:
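
A sketch of the two ways to configure the new `recursive` strategy described above (the values are illustrative):

```python
from elasticsearch.dsl.types import ChunkingSettings

# Predefined separators for a known text type...
markdown_chunking = ChunkingSettings(
    strategy="recursive",
    max_chunk_size=200,
    separator_group="markdown",
)

# ...or a custom list, tried in order when splitting the text.
custom_chunking = ChunkingSettings(
    strategy="recursive",
    max_chunk_size=200,
    separators=["\n# ", "\n## ", "\n\n"],
)
```
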
@@ -4523,7 +4548,7 @@ class ArrayPercentilesItem(AttrDict[Any]
     :arg value_as_string:
     """
 
-    key: str
+    key: float
     value: Union[float, None]
     value_as_string: str
 
@@ -5369,7 +5394,9 @@ class HdrPercentileRanksAggregate(AttrDi
     :arg meta:
     """
 
-    values: Union[Mapping[str, Union[str, int, None]], Sequence["ArrayPercentilesItem"]]
+    values: Union[
+        Mapping[str, Union[str, float, None]], Sequence["ArrayPercentilesItem"]
+    ]
     meta: Mapping[str, Any]
 
 
@@ -5379,7 +5406,9 @@ class HdrPercentilesAggregate(AttrDict[A
     :arg meta:
     """
 
-    values: Union[Mapping[str, Union[str, int, None]], Sequence["ArrayPercentilesItem"]]
+    values: Union[
+        Mapping[str, Union[str, float, None]], Sequence["ArrayPercentilesItem"]
+    ]
     meta: Mapping[str, Any]
 
 
@@ -5886,7 +5915,9 @@ class PercentilesBucketAggregate(AttrDic
     :arg meta:
     """
 
-    values: Union[Mapping[str, Union[str, int, None]], Sequence["ArrayPercentilesItem"]]
+    values: Union[
+        Mapping[str, Union[str, float, None]], Sequence["ArrayPercentilesItem"]
+    ]
     meta: Mapping[str, Any]
 
 
@@ -6087,17 +6118,19 @@ class SearchProfile(AttrDict[Any]):
 class ShardFailure(AttrDict[Any]):
     """
     :arg reason: (required)
-    :arg shard: (required)
     :arg index:
     :arg node:
+    :arg shard:
     :arg status:
+    :arg primary:
     """
 
     reason: "ErrorCause"
-    shard: int
     index: str
     node: str
+    shard: int
     status: str
+    primary: bool
 
 
 class ShardProfile(AttrDict[Any]):
@@ -6421,7 +6454,9 @@ class TDigestPercentileRanksAggregate(At
     :arg meta:
     """
 
-    values: Union[Mapping[str, Union[str, int, None]], Sequence["ArrayPercentilesItem"]]
+    values: Union[
+        Mapping[str, Union[str, float, None]], Sequence["ArrayPercentilesItem"]
+    ]
     meta: Mapping[str, Any]
 
 
@@ -6431,7 +6466,9 @@ class TDigestPercentilesAggregate(AttrDi
     :arg meta:
     """
 
-    values: Union[Mapping[str, Union[str, int, None]], Sequence["ArrayPercentilesItem"]]
+    values: Union[
+        Mapping[str, Union[str, float, None]], Sequence["ArrayPercentilesItem"]
+    ]
     meta: Mapping[str, Any]
 
 
diff -pruN 9.1.0-3/elasticsearch/dsl/utils.py 9.1.1-1/elasticsearch/dsl/utils.py
--- 9.1.0-3/elasticsearch/dsl/utils.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/dsl/utils.py	2025-09-12 13:23:45.000000000 +0000
@@ -603,7 +603,7 @@ class ObjectBase(AttrDict[Any]):
             # if this is a mapped field,
             f = self.__get_field(k)
             if f and f._coerce:
-                v = f.serialize(v)
+                v = f.serialize(v, skip_empty=skip_empty)
 
             # if someone assigned AttrList, unwrap it
             if isinstance(v, AttrList):
diff -pruN 9.1.0-3/elasticsearch/esql/__init__.py 9.1.1-1/elasticsearch/esql/__init__.py
--- 9.1.0-3/elasticsearch/esql/__init__.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/esql/__init__.py	2025-09-12 13:23:45.000000000 +0000
@@ -15,4 +15,5 @@
 #  specific language governing permissions and limitations
 #  under the License.
 
-from .esql import ESQL, and_, not_, or_  # noqa: F401
+from ..dsl import E  # noqa: F401
+from .esql import ESQL, ESQLBase, and_, not_, or_  # noqa: F401
diff -pruN 9.1.0-3/elasticsearch/esql/esql.py 9.1.1-1/elasticsearch/esql/esql.py
--- 9.1.0-3/elasticsearch/esql/esql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/esql/esql.py	2025-09-12 13:23:45.000000000 +0000
@@ -16,6 +16,7 @@
 #  under the License.
 
 import json
+import re
 from abc import ABC, abstractmethod
 from typing import Any, Dict, Optional, Tuple, Type, Union
 
@@ -111,6 +112,29 @@ class ESQLBase(ABC):
     def _render_internal(self) -> str:
         pass
 
+    @staticmethod
+    def _format_index(index: IndexType) -> str:
+        return index._index._name if hasattr(index, "_index") else str(index)
+
+    @staticmethod
+    def _format_id(id: FieldType, allow_patterns: bool = False) -> str:
+        s = str(id)  # in case it is an InstrumentedField
+        if allow_patterns and "*" in s:
+            return s  # patterns cannot be escaped
+        if re.fullmatch(r"[a-zA-Z_@][a-zA-Z0-9_\.]*", s):
+            return s
+        # this identifier needs to be escaped
+        s = s.replace("`", "``")
+        return f"`{s}`"
+
+    @staticmethod
+    def _format_expr(expr: ExpressionType) -> str:
+        return (
+            json.dumps(expr)
+            if not isinstance(expr, (str, InstrumentedExpression))
+            else str(expr)
+        )
+
     def _is_forked(self) -> bool:
         if self.__class__.__name__ == "Fork":
             return True
@@ -427,7 +451,7 @@ class ESQLBase(ABC):
         """
         return Sample(self, probability)
 
-    def sort(self, *columns: FieldType) -> "Sort":
+    def sort(self, *columns: ExpressionType) -> "Sort":
         """The ``SORT`` processing command sorts a table on one or more columns.
 
         :param columns: The columns to sort on.
@@ -570,15 +594,12 @@ class From(ESQLBase):
         return self
 
     def _render_internal(self) -> str:
-        indices = [
-            index if isinstance(index, str) else index._index._name
-            for index in self._indices
-        ]
+        indices = [self._format_index(index) for index in self._indices]
         s = f'{self.__class__.__name__.upper()} {", ".join(indices)}'
         if self._metadata_fields:
             s = (
                 s
-                + f' METADATA {", ".join([str(field) for field in self._metadata_fields])}'
+                + f' METADATA {", ".join([self._format_id(field) for field in self._metadata_fields])}'
             )
         return s
 
@@ -594,7 +615,11 @@ class Row(ESQLBase):
     def __init__(self, **params: ExpressionType):
         super().__init__()
         self._params = {
-            k: json.dumps(v) if not isinstance(v, InstrumentedExpression) else v
+            self._format_id(k): (
+                json.dumps(v)
+                if not isinstance(v, InstrumentedExpression)
+                else self._format_expr(v)
+            )
             for k, v in params.items()
         }
 
@@ -615,7 +640,7 @@ class Show(ESQLBase):
         self._item = item
 
     def _render_internal(self) -> str:
-        return f"SHOW {self._item}"
+        return f"SHOW {self._format_id(self._item)}"
 
 
 class Branch(ESQLBase):
@@ -667,11 +692,11 @@ class ChangePoint(ESQLBase):
         return self
 
     def _render_internal(self) -> str:
-        key = "" if not self._key else f" ON {self._key}"
+        key = "" if not self._key else f" ON {self._format_id(self._key)}"
         names = (
             ""
             if not self._type_name and not self._pvalue_name
-            else f' AS {self._type_name or "type"}, {self._pvalue_name or "pvalue"}'
+            else f' AS {self._format_id(self._type_name or "type")}, {self._format_id(self._pvalue_name or "pvalue")}'
         )
         return f"CHANGE_POINT {self._value}{key}{names}"
 
@@ -709,12 +734,13 @@ class Completion(ESQLBase):
     def _render_internal(self) -> str:
         if self._inference_id is None:
             raise ValueError("The completion command requires an inference ID")
+        with_ = {"inference_id": self._inference_id}
         if self._named_prompt:
             column = list(self._named_prompt.keys())[0]
             prompt = list(self._named_prompt.values())[0]
-            return f"COMPLETION {column} = {prompt} WITH {self._inference_id}"
+            return f"COMPLETION {self._format_id(column)} = {self._format_id(prompt)} WITH {json.dumps(with_)}"
         else:
-            return f"COMPLETION {self._prompt[0]} WITH {self._inference_id}"
+            return f"COMPLETION {self._format_id(self._prompt[0])} WITH {json.dumps(with_)}"
 
 
 class Dissect(ESQLBase):
@@ -742,9 +768,13 @@ class Dissect(ESQLBase):
 
     def _render_internal(self) -> str:
         sep = (
-            "" if self._separator is None else f' APPEND_SEPARATOR="{self._separator}"'
+            ""
+            if self._separator is None
+            else f" APPEND_SEPARATOR={json.dumps(self._separator)}"
+        )
+        return (
+            f"DISSECT {self._format_id(self._input)} {json.dumps(self._pattern)}{sep}"
         )
-        return f"DISSECT {self._input} {json.dumps(self._pattern)}{sep}"
 
 
 class Drop(ESQLBase):
@@ -760,7 +790,7 @@ class Drop(ESQLBase):
         self._columns = columns
 
     def _render_internal(self) -> str:
-        return f'DROP {", ".join([str(col) for col in self._columns])}'
+        return f'DROP {", ".join([self._format_id(col, allow_patterns=True) for col in self._columns])}'
 
 
 class Enrich(ESQLBase):
@@ -814,12 +844,18 @@ class Enrich(ESQLBase):
         return self
 
     def _render_internal(self) -> str:
-        on = "" if self._match_field is None else f" ON {self._match_field}"
+        on = (
+            ""
+            if self._match_field is None
+            else f" ON {self._format_id(self._match_field)}"
+        )
         with_ = ""
         if self._named_fields:
-            with_ = f' WITH {", ".join([f"{name} = {field}" for name, field in self._named_fields.items()])}'
+            with_ = f' WITH {", ".join([f"{self._format_id(name)} = {self._format_id(field)}" for name, field in self._named_fields.items()])}'
         elif self._fields is not None:
-            with_ = f' WITH {", ".join([str(field) for field in self._fields])}'
+            with_ = (
+                f' WITH {", ".join([self._format_id(field) for field in self._fields])}'
+            )
         return f"ENRICH {self._policy}{on}{with_}"
 
 
@@ -832,7 +868,10 @@ class Eval(ESQLBase):
     """
 
     def __init__(
-        self, parent: ESQLBase, *columns: FieldType, **named_columns: FieldType
+        self,
+        parent: ESQLBase,
+        *columns: ExpressionType,
+        **named_columns: ExpressionType,
     ):
         if columns and named_columns:
             raise ValueError(
@@ -844,10 +883,13 @@ class Eval(ESQLBase):
     def _render_internal(self) -> str:
         if isinstance(self._columns, dict):
             cols = ", ".join(
-                [f"{name} = {value}" for name, value in self._columns.items()]
+                [
+                    f"{self._format_id(name)} = {self._format_expr(value)}"
+                    for name, value in self._columns.items()
+                ]
             )
         else:
-            cols = ", ".join([f"{col}" for col in self._columns])
+            cols = ", ".join([f"{self._format_expr(col)}" for col in self._columns])
         return f"EVAL {cols}"
 
 
@@ -900,7 +942,7 @@ class Grok(ESQLBase):
         self._pattern = pattern
 
     def _render_internal(self) -> str:
-        return f"GROK {self._input} {json.dumps(self._pattern)}"
+        return f"GROK {self._format_id(self._input)} {json.dumps(self._pattern)}"
 
 
 class Keep(ESQLBase):
@@ -916,7 +958,7 @@ class Keep(ESQLBase):
         self._columns = columns
 
     def _render_internal(self) -> str:
-        return f'KEEP {", ".join([f"{col}" for col in self._columns])}'
+        return f'KEEP {", ".join([f"{self._format_id(col, allow_patterns=True)}" for col in self._columns])}'
 
 
 class Limit(ESQLBase):
@@ -932,7 +974,7 @@ class Limit(ESQLBase):
         self._max_number_of_rows = max_number_of_rows
 
     def _render_internal(self) -> str:
-        return f"LIMIT {self._max_number_of_rows}"
+        return f"LIMIT {json.dumps(self._max_number_of_rows)}"
 
 
 class LookupJoin(ESQLBase):
@@ -967,7 +1009,9 @@ class LookupJoin(ESQLBase):
             if isinstance(self._lookup_index, str)
             else self._lookup_index._index._name
         )
-        return f"LOOKUP JOIN {index} ON {self._field}"
+        return (
+            f"LOOKUP JOIN {self._format_index(index)} ON {self._format_id(self._field)}"
+        )
 
 
 class MvExpand(ESQLBase):
@@ -983,7 +1027,7 @@ class MvExpand(ESQLBase):
         self._column = column
 
     def _render_internal(self) -> str:
-        return f"MV_EXPAND {self._column}"
+        return f"MV_EXPAND {self._format_id(self._column)}"
 
 
 class Rename(ESQLBase):
@@ -999,7 +1043,7 @@ class Rename(ESQLBase):
         self._columns = columns
 
     def _render_internal(self) -> str:
-        return f'RENAME {", ".join([f"{old_name} AS {new_name}" for old_name, new_name in self._columns.items()])}'
+        return f'RENAME {", ".join([f"{self._format_id(old_name)} AS {self._format_id(new_name)}" for old_name, new_name in self._columns.items()])}'
 
 
 class Sample(ESQLBase):
@@ -1015,7 +1059,7 @@ class Sample(ESQLBase):
         self._probability = probability
 
     def _render_internal(self) -> str:
-        return f"SAMPLE {self._probability}"
+        return f"SAMPLE {json.dumps(self._probability)}"
 
 
 class Sort(ESQLBase):
@@ -1026,12 +1070,16 @@ class Sort(ESQLBase):
     in a single expression.
     """
 
-    def __init__(self, parent: ESQLBase, *columns: FieldType):
+    def __init__(self, parent: ESQLBase, *columns: ExpressionType):
         super().__init__(parent)
         self._columns = columns
 
     def _render_internal(self) -> str:
-        return f'SORT {", ".join([f"{col}" for col in self._columns])}'
+        sorts = [
+            " ".join([self._format_id(term) for term in str(col).split(" ")])
+            for col in self._columns
+        ]
+        return f'SORT {", ".join([f"{sort}" for sort in sorts])}'
 
 
 class Stats(ESQLBase):
@@ -1062,14 +1110,17 @@ class Stats(ESQLBase):
 
     def _render_internal(self) -> str:
         if isinstance(self._expressions, dict):
-            exprs = [f"{key} = {value}" for key, value in self._expressions.items()]
+            exprs = [
+                f"{self._format_id(key)} = {self._format_expr(value)}"
+                for key, value in self._expressions.items()
+            ]
         else:
-            exprs = [f"{expr}" for expr in self._expressions]
+            exprs = [f"{self._format_expr(expr)}" for expr in self._expressions]
         expression_separator = ",\n        "
         by = (
             ""
             if self._grouping_expressions is None
-            else f'\n        BY {", ".join([f"{expr}" for expr in self._grouping_expressions])}'
+            else f'\n        BY {", ".join([f"{self._format_expr(expr)}" for expr in self._grouping_expressions])}'
         )
         return f'STATS {expression_separator.join([f"{expr}" for expr in exprs])}{by}'
 
@@ -1087,7 +1138,7 @@ class Where(ESQLBase):
         self._expressions = expressions
 
     def _render_internal(self) -> str:
-        return f'WHERE {" AND ".join([f"{expr}" for expr in self._expressions])}'
+        return f'WHERE {" AND ".join([f"{self._format_expr(expr)}" for expr in self._expressions])}'
 
 
 def and_(*expressions: InstrumentedExpression) -> "InstrumentedExpression":
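
The new `_format_id()` escaping shows up in rendered queries: names that are not plain identifiers are wrapped in backticks, while wildcard patterns in `KEEP`/`DROP` pass through unescaped. A rough sketch (exact whitespace may differ):

```python
from elasticsearch.esql import ESQL

print(ESQL.from_("employees").keep("first_name", "last*", "a b"))
# FROM employees
# | KEEP first_name, last*, `a b`
```
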
diff -pruN 9.1.0-3/elasticsearch/esql/functions.py 9.1.1-1/elasticsearch/esql/functions.py
--- 9.1.0-3/elasticsearch/esql/functions.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/elasticsearch/esql/functions.py	2025-09-12 13:23:45.000000000 +0000
@@ -19,11 +19,15 @@ import json
 from typing import Any
 
 from elasticsearch.dsl.document_base import InstrumentedExpression
-from elasticsearch.esql.esql import ExpressionType
+from elasticsearch.esql.esql import ESQLBase, ExpressionType
 
 
 def _render(v: Any) -> str:
-    return json.dumps(v) if not isinstance(v, InstrumentedExpression) else str(v)
+    return (
+        json.dumps(v)
+        if not isinstance(v, InstrumentedExpression)
+        else ESQLBase._format_expr(v)
+    )
 
 
 def abs(number: ExpressionType) -> InstrumentedExpression:
@@ -69,7 +73,9 @@ def atan2(
     :param y_coordinate: y coordinate. If `null`, the function returns `null`.
     :param x_coordinate: x coordinate. If `null`, the function returns `null`.
     """
-    return InstrumentedExpression(f"ATAN2({y_coordinate}, {x_coordinate})")
+    return InstrumentedExpression(
+        f"ATAN2({_render(y_coordinate)}, {_render(x_coordinate)})"
+    )
 
 
 def avg(number: ExpressionType) -> InstrumentedExpression:
@@ -114,7 +120,7 @@ def bucket(
     :param to: End of the range. Can be a number, a date or a date expressed as a string.
     """
     return InstrumentedExpression(
-        f"BUCKET({_render(field)}, {_render(buckets)}, {from_}, {_render(to)})"
+        f"BUCKET({_render(field)}, {_render(buckets)}, {_render(from_)}, {_render(to)})"
     )
 
 
@@ -169,7 +175,7 @@ def cidr_match(ip: ExpressionType, block
     :param ip: IP address of type `ip` (both IPv4 and IPv6 are supported).
     :param block_x: CIDR block to test the IP against.
     """
-    return InstrumentedExpression(f"CIDR_MATCH({_render(ip)}, {block_x})")
+    return InstrumentedExpression(f"CIDR_MATCH({_render(ip)}, {_render(block_x)})")
 
 
 def coalesce(first: ExpressionType, rest: ExpressionType) -> InstrumentedExpression:
@@ -264,7 +270,7 @@ def date_diff(
     :param end_timestamp: A string representing an end timestamp
     """
     return InstrumentedExpression(
-        f"DATE_DIFF({_render(unit)}, {start_timestamp}, {end_timestamp})"
+        f"DATE_DIFF({_render(unit)}, {_render(start_timestamp)}, {_render(end_timestamp)})"
     )
 
 
@@ -285,7 +291,9 @@ def date_extract(
         the function returns `null`.
     :param date: Date expression. If `null`, the function returns `null`.
     """
-    return InstrumentedExpression(f"DATE_EXTRACT({date_part}, {_render(date)})")
+    return InstrumentedExpression(
+        f"DATE_EXTRACT({_render(date_part)}, {_render(date)})"
+    )
 
 
 def date_format(
@@ -301,7 +309,7 @@ def date_format(
     """
     if date_format is not None:
         return InstrumentedExpression(
-            f"DATE_FORMAT({json.dumps(date_format)}, {_render(date)})"
+            f"DATE_FORMAT({_render(date_format)}, {_render(date)})"
         )
     else:
         return InstrumentedExpression(f"DATE_FORMAT({_render(date)})")
@@ -317,7 +325,9 @@ def date_parse(
     :param date_string: Date expression as a string. If `null` or an empty
                         string, the function returns `null`.
     """
-    return InstrumentedExpression(f"DATE_PARSE({date_pattern}, {date_string})")
+    return InstrumentedExpression(
+        f"DATE_PARSE({_render(date_pattern)}, {_render(date_string)})"
+    )
 
 
 def date_trunc(
@@ -639,7 +649,7 @@ def min_over_time(field: ExpressionType)
 
 
 def multi_match(
-    query: ExpressionType, fields: ExpressionType, options: ExpressionType = None
+    query: ExpressionType, *fields: ExpressionType, options: ExpressionType = None
 ) -> InstrumentedExpression:
     """Use `MULTI_MATCH` to perform a multi-match query on the specified field.
     The multi_match query builds on the match query to allow multi-field queries.
@@ -651,11 +661,11 @@ def multi_match(
     """
     if options is not None:
         return InstrumentedExpression(
-            f"MULTI_MATCH({_render(query)}, {_render(fields)}, {_render(options)})"
+            f'MULTI_MATCH({_render(query)}, {", ".join([_render(c) for c in fields])}, {_render(options)})'
         )
     else:
         return InstrumentedExpression(
-            f"MULTI_MATCH({_render(query)}, {_render(fields)})"
+            f'MULTI_MATCH({_render(query)}, {", ".join([_render(c) for c in fields])})'
         )
 
 
@@ -929,7 +939,7 @@ def replace(
     :param new_string: Replacement string.
     """
     return InstrumentedExpression(
-        f"REPLACE({_render(string)}, {_render(regex)}, {new_string})"
+        f"REPLACE({_render(string)}, {_render(regex)}, {_render(new_string)})"
     )
 
 
@@ -1004,7 +1014,7 @@ def scalb(d: ExpressionType, scale_facto
     :param scale_factor: Numeric expression for the scale factor. If `null`, the
                          function returns `null`.
     """
-    return InstrumentedExpression(f"SCALB({_render(d)}, {scale_factor})")
+    return InstrumentedExpression(f"SCALB({_render(d)}, {_render(scale_factor)})")
 
 
 def sha1(input: ExpressionType) -> InstrumentedExpression:
@@ -1116,7 +1126,7 @@ def st_contains(
                    first. This means it is not possible to combine `geo_*` and
                    `cartesian_*` parameters.
     """
-    return InstrumentedExpression(f"ST_CONTAINS({geom_a}, {geom_b})")
+    return InstrumentedExpression(f"ST_CONTAINS({_render(geom_a)}, {_render(geom_b)})")
 
 
 def st_disjoint(
@@ -1135,7 +1145,7 @@ def st_disjoint(
                    first. This means it is not possible to combine `geo_*` and
                    `cartesian_*` parameters.
     """
-    return InstrumentedExpression(f"ST_DISJOINT({geom_a}, {geom_b})")
+    return InstrumentedExpression(f"ST_DISJOINT({_render(geom_a)}, {_render(geom_b)})")
 
 
 def st_distance(
@@ -1153,7 +1163,7 @@ def st_distance(
                    also have the same coordinate system as the first. This means it
                    is not possible to combine `geo_point` and `cartesian_point` parameters.
     """
-    return InstrumentedExpression(f"ST_DISTANCE({geom_a}, {geom_b})")
+    return InstrumentedExpression(f"ST_DISTANCE({_render(geom_a)}, {_render(geom_b)})")
 
 
 def st_envelope(geometry: ExpressionType) -> InstrumentedExpression:
@@ -1208,7 +1218,7 @@ def st_geohash_to_long(grid_id: Expressi
     :param grid_id: Input geohash grid-id. The input can be a single- or
                     multi-valued column or an expression.
     """
-    return InstrumentedExpression(f"ST_GEOHASH_TO_LONG({grid_id})")
+    return InstrumentedExpression(f"ST_GEOHASH_TO_LONG({_render(grid_id)})")
 
 
 def st_geohash_to_string(grid_id: ExpressionType) -> InstrumentedExpression:
@@ -1218,7 +1228,7 @@ def st_geohash_to_string(grid_id: Expres
     :param grid_id: Input geohash grid-id. The input can be a single- or
                     multi-valued column or an expression.
     """
-    return InstrumentedExpression(f"ST_GEOHASH_TO_STRING({grid_id})")
+    return InstrumentedExpression(f"ST_GEOHASH_TO_STRING({_render(grid_id)})")
 
 
 def st_geohex(
@@ -1254,7 +1264,7 @@ def st_geohex_to_long(grid_id: Expressio
     :param grid_id: Input geohex grid-id. The input can be a single- or
                     multi-valued column or an expression.
     """
-    return InstrumentedExpression(f"ST_GEOHEX_TO_LONG({grid_id})")
+    return InstrumentedExpression(f"ST_GEOHEX_TO_LONG({_render(grid_id)})")
 
 
 def st_geohex_to_string(grid_id: ExpressionType) -> InstrumentedExpression:
@@ -1264,7 +1274,7 @@ def st_geohex_to_string(grid_id: Express
     :param grid_id: Input Geohex grid-id. The input can be a single- or
                     multi-valued column or an expression.
     """
-    return InstrumentedExpression(f"ST_GEOHEX_TO_STRING({grid_id})")
+    return InstrumentedExpression(f"ST_GEOHEX_TO_STRING({_render(grid_id)})")
 
 
 def st_geotile(
@@ -1300,7 +1310,7 @@ def st_geotile_to_long(grid_id: Expressi
     :param grid_id: Input geotile grid-id. The input can be a single- or
                     multi-valued column or an expression.
     """
-    return InstrumentedExpression(f"ST_GEOTILE_TO_LONG({grid_id})")
+    return InstrumentedExpression(f"ST_GEOTILE_TO_LONG({_render(grid_id)})")
 
 
 def st_geotile_to_string(grid_id: ExpressionType) -> InstrumentedExpression:
@@ -1310,7 +1320,7 @@ def st_geotile_to_string(grid_id: Expres
     :param grid_id: Input geotile grid-id. The input can be a single- or
                     multi-valued column or an expression.
     """
-    return InstrumentedExpression(f"ST_GEOTILE_TO_STRING({grid_id})")
+    return InstrumentedExpression(f"ST_GEOTILE_TO_STRING({_render(grid_id)})")
 
 
 def st_intersects(
@@ -1330,7 +1340,9 @@ def st_intersects(
                    first. This means it is not possible to combine `geo_*` and
                    `cartesian_*` parameters.
     """
-    return InstrumentedExpression(f"ST_INTERSECTS({geom_a}, {geom_b})")
+    return InstrumentedExpression(
+        f"ST_INTERSECTS({_render(geom_a)}, {_render(geom_b)})"
+    )
 
 
 def st_within(geom_a: ExpressionType, geom_b: ExpressionType) -> InstrumentedExpression:
@@ -1346,7 +1358,7 @@ def st_within(geom_a: ExpressionType, ge
                    first. This means it is not possible to combine `geo_*` and
                    `cartesian_*` parameters.
     """
-    return InstrumentedExpression(f"ST_WITHIN({geom_a}, {geom_b})")
+    return InstrumentedExpression(f"ST_WITHIN({_render(geom_a)}, {_render(geom_b)})")
 
 
 def st_x(point: ExpressionType) -> InstrumentedExpression:
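The functions.py hunks above are all of a piece: every argument now passes through `_render()`, and `multi_match()` takes its target fields as variadic positional arguments instead of a single value. A hedged usage sketch (the exact rendered text is an assumption based on the f-strings above):

```python
from elasticsearch.dsl import Document, M
from elasticsearch.esql.functions import multi_match


class Employee(Document):
    first_name: M[str]
    last_name: M[str]

    class Index:
        name = "employees"


# each target field is now its own positional argument after the query
expr = multi_match("Sam", Employee.first_name, Employee.last_name)
# the query string is JSON-quoted by _render(), while field references
# render bare, giving roughly: MULTI_MATCH("Sam", first_name, last_name)
print(str(expr))
```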
diff -pruN 9.1.0-3/examples/dsl/async/esql_employees.py 9.1.1-1/examples/dsl/async/esql_employees.py
--- 9.1.0-3/examples/dsl/async/esql_employees.py	1970-01-01 00:00:00.000000000 +0000
+++ 9.1.1-1/examples/dsl/async/esql_employees.py	2025-09-12 13:23:45.000000000 +0000
@@ -0,0 +1,170 @@
+#  Licensed to Elasticsearch B.V. under one or more contributor
+#  license agreements. See the NOTICE file distributed with
+#  this work for additional information regarding copyright
+#  ownership. Elasticsearch B.V. licenses this file to you under
+#  the Apache License, Version 2.0 (the "License"); you may
+#  not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+# 	http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+
+"""
+# ES|QL query builder example
+
+Requirements:
+
+$ pip install "elasticsearch[async]" faker
+
+To run the example:
+
+$ python esql_employees.py "name to search"
+
+The index will be created automatically with a list of 1000 randomly generated
+employees if it does not exist. Add `--recreate-index` or `-r` to the command
+to regenerate it.
+
+Examples:
+
+$ python esql_employees.py "Mark"  # employees named Mark (first or last names)
+$ python esql_employees.py "Sarah" --limit 10  # up to 10 employees named Sarah
+$ python esql_employees.py "Sam" --sort height  # sort results by height
+$ python esql_employees.py "Sam" --sort name  # sort results by last name
+"""
+
+import argparse
+import asyncio
+import os
+import random
+
+from faker import Faker
+
+from elasticsearch.dsl import AsyncDocument, InnerDoc, M, async_connections
+from elasticsearch.esql import ESQLBase
+from elasticsearch.esql.functions import concat, multi_match
+
+fake = Faker()
+
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+    zip_code: M[str]
+
+
+class Employee(AsyncDocument):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = "employees"
+
+    @property
+    def name(self) -> str:
+        return f"{self.first_name} {self.last_name}"
+
+    def __repr__(self) -> str:
+        return f"<Employee[{self.meta.id}]: {self.first_name} {self.last_name}>"
+
+
+async def create(num_employees: int = 1000) -> None:
+    print("Creating a new employee index...")
+    if await Employee._index.exists():
+        await Employee._index.delete()
+    await Employee.init()
+
+    for i in range(num_employees):
+        address = Address(
+            address=fake.address(), city=fake.city(), zip_code=fake.zipcode()
+        )
+        emp = Employee(
+            emp_no=10000 + i,
+            first_name=fake.first_name(),
+            last_name=fake.last_name(),
+            height=int((random.random() * 0.8 + 1.5) * 1000) / 1000,
+            still_hired=random.random() >= 0.5,
+            address=address,
+        )
+        await emp.save()
+    await Employee._index.refresh()
+
+
+async def search(query: str, limit: int, sort: str) -> None:
+    q: ESQLBase = (
+        Employee.esql_from()
+        .where(multi_match(query, Employee.first_name, Employee.last_name))
+        .eval(full_name=concat(Employee.first_name, " ", Employee.last_name))
+    )
+    if sort == "height":
+        q = q.sort(Employee.height.desc())
+    elif sort == "name":
+        q = q.sort(Employee.last_name.asc())
+    q = q.limit(limit)
+    async for result in Employee.esql_execute(q, return_additional=True):
+        assert isinstance(result, tuple)
+        employee = result[0]
+        full_name = result[1]["full_name"]
+        print(
+            f"{full_name:<20}",
+            f"{'Hired' if employee.still_hired else 'Not hired':<10}",
+            f"{employee.height:5.2f}m",
+            f"{employee.address.city:<20}",
+        )
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(description="Employee ES|QL example")
+    parser.add_argument(
+        "--recreate-index",
+        "-r",
+        action="store_true",
+        help="Recreate and populate the index",
+    )
+    parser.add_argument(
+        "--limit",
+        action="store",
+        type=int,
+        default=100,
+        help="Maximum number or employees to return (default: 100)",
+    )
+    parser.add_argument(
+        "--sort",
+        action="store",
+        type=str,
+        default=None,
+        help='Sort by "name" (ascending) or by "height" (descending) (default: no sorting)',
+    )
+    parser.add_argument(
+        "query", action="store", help="The name or partial name to search for"
+    )
+    return parser.parse_args()
+
+
+async def main() -> None:
+    args = parse_args()
+
+    # initiate the default connection to elasticsearch
+    async_connections.create_connection(hosts=[os.environ["ELASTICSEARCH_URL"]])
+
+    if args.recreate_index or not await Employee._index.exists():
+        await create()
+    await Employee.init()
+
+    await search(args.query, args.limit, args.sort)
+
+    # close the connection
+    await async_connections.get_connection().close()
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff -pruN 9.1.0-3/examples/dsl/esql_employees.py 9.1.1-1/examples/dsl/esql_employees.py
--- 9.1.0-3/examples/dsl/esql_employees.py	1970-01-01 00:00:00.000000000 +0000
+++ 9.1.1-1/examples/dsl/esql_employees.py	2025-09-12 13:23:45.000000000 +0000
@@ -0,0 +1,169 @@
+#  Licensed to Elasticsearch B.V. under one or more contributor
+#  license agreements. See the NOTICE file distributed with
+#  this work for additional information regarding copyright
+#  ownership. Elasticsearch B.V. licenses this file to you under
+#  the Apache License, Version 2.0 (the "License"); you may
+#  not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+# 	http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+
+"""
+# ES|QL query builder example
+
+Requirements:
+
+$ pip install elasticsearch faker
+
+To run the example:
+
+$ python esql_employees.py "name to search"
+
+The index will be created automatically with a list of 1000 randomly generated
+employees if it does not exist. Add `--recreate-index` or `-r` to the command
+to regenerate it.
+
+Examples:
+
+$ python esql_employees.py "Mark"  # employees named Mark (first or last names)
+$ python esql_employees.py "Sarah" --limit 10  # up to 10 employees named Sarah
+$ python esql_employees.py "Sam" --sort height  # sort results by height
+$ python esql_employees.py "Sam" --sort name  # sort results by last name
+"""
+
+import argparse
+import os
+import random
+
+from faker import Faker
+
+from elasticsearch.dsl import Document, InnerDoc, M, connections
+from elasticsearch.esql import ESQLBase
+from elasticsearch.esql.functions import concat, multi_match
+
+fake = Faker()
+
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+    zip_code: M[str]
+
+
+class Employee(Document):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = "employees"
+
+    @property
+    def name(self) -> str:
+        return f"{self.first_name} {self.last_name}"
+
+    def __repr__(self) -> str:
+        return f"<Employee[{self.meta.id}]: {self.first_name} {self.last_name}>"
+
+
+def create(num_employees: int = 1000) -> None:
+    print("Creating a new employee index...")
+    if Employee._index.exists():
+        Employee._index.delete()
+    Employee.init()
+
+    for i in range(num_employees):
+        address = Address(
+            address=fake.address(), city=fake.city(), zip_code=fake.zipcode()
+        )
+        emp = Employee(
+            emp_no=10000 + i,
+            first_name=fake.first_name(),
+            last_name=fake.last_name(),
+            height=int((random.random() * 0.8 + 1.5) * 1000) / 1000,
+            still_hired=random.random() >= 0.5,
+            address=address,
+        )
+        emp.save()
+    Employee._index.refresh()
+
+
+def search(query: str, limit: int, sort: str) -> None:
+    q: ESQLBase = (
+        Employee.esql_from()
+        .where(multi_match(query, Employee.first_name, Employee.last_name))
+        .eval(full_name=concat(Employee.first_name, " ", Employee.last_name))
+    )
+    if sort == "height":
+        q = q.sort(Employee.height.desc())
+    elif sort == "name":
+        q = q.sort(Employee.last_name.asc())
+    q = q.limit(limit)
+    for result in Employee.esql_execute(q, return_additional=True):
+        assert isinstance(result, tuple)
+        employee = result[0]
+        full_name = result[1]["full_name"]
+        print(
+            f"{full_name:<20}",
+            f"{'Hired' if employee.still_hired else 'Not hired':<10}",
+            f"{employee.height:5.2f}m",
+            f"{employee.address.city:<20}",
+        )
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(description="Employee ES|QL example")
+    parser.add_argument(
+        "--recreate-index",
+        "-r",
+        action="store_true",
+        help="Recreate and populate the index",
+    )
+    parser.add_argument(
+        "--limit",
+        action="store",
+        type=int,
+        default=100,
+        help="Maximum number or employees to return (default: 100)",
+    )
+    parser.add_argument(
+        "--sort",
+        action="store",
+        type=str,
+        default=None,
+        help='Sort by "name" (ascending) or by "height" (descending) (default: no sorting)',
+    )
+    parser.add_argument(
+        "query", action="store", help="The name or partial name to search for"
+    )
+    return parser.parse_args()
+
+
+def main() -> None:
+    args = parse_args()
+
+    # initiate the default connection to elasticsearch
+    connections.create_connection(hosts=[os.environ["ELASTICSEARCH_URL"]])
+
+    if args.recreate_index or not Employee._index.exists():
+        create()
+    Employee.init()
+
+    search(args.query, args.limit, args.sort)
+
+    # close the connection
+    connections.get_connection().close()
+
+
+if __name__ == "__main__":
+    main()
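Both variants of the example consume `esql_execute()` with `return_additional=True`, which yields `(document, extras)` pairs rather than bare documents; `extras` is a dict keyed by the names of the computed columns. A short sketch of the unpacking idiom, reusing the `Employee` class and the query `q` built inside `search()` in the sync example above:

```python
# sketch only: Employee and q are assumed from the example above
for employee, extras in Employee.esql_execute(q, return_additional=True):
    # extras holds the EVAL'd columns that are not document fields,
    # here the computed full_name
    print(extras["full_name"], employee.height)
```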
diff -pruN 9.1.0-3/examples/dsl/semantic_text.py 9.1.1-1/examples/dsl/semantic_text.py
--- 9.1.0-3/examples/dsl/semantic_text.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/examples/dsl/semantic_text.py	2025-09-12 13:23:45.000000000 +0000
@@ -21,7 +21,7 @@
 
 Requirements:
 
-$ pip install "elasticsearch" tqdm
+$ pip install elasticsearch tqdm
 
 Before running this example, an ELSER inference endpoint must be created in the
 Elasticsearch cluster. This can be done manually from Kibana, or with the
diff -pruN 9.1.0-3/examples/dsl/sparse_vectors.py 9.1.1-1/examples/dsl/sparse_vectors.py
--- 9.1.0-3/examples/dsl/sparse_vectors.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/examples/dsl/sparse_vectors.py	2025-09-12 13:23:45.000000000 +0000
@@ -20,7 +20,7 @@
 
 Requirements:
 
-$ pip install nltk tqdm "elasticsearch"
+$ pip install nltk tqdm elasticsearch
 
 Before running this example, the ELSER v2 model must be downloaded and deployed
 to the Elasticsearch cluster, and an ingest pipeline must be defined. This can
diff -pruN 9.1.0-3/examples/dsl/vectors.py 9.1.1-1/examples/dsl/vectors.py
--- 9.1.0-3/examples/dsl/vectors.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/examples/dsl/vectors.py	2025-09-12 13:23:45.000000000 +0000
@@ -20,7 +20,7 @@
 
 Requirements:
 
-$ pip install nltk sentence_transformers tqdm "elasticsearch"
+$ pip install nltk sentence_transformers tqdm elasticsearch
 
 To run the example:
 
diff -pruN 9.1.0-3/pyproject.toml 9.1.1-1/pyproject.toml
--- 9.1.0-3/pyproject.toml	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/pyproject.toml	2025-09-12 13:23:45.000000000 +0000
@@ -77,8 +77,6 @@ dev = [
     "pandas",
     "mapbox-vector-tile",
     "jinja2",
-    "nltk",
-    "sentence_transformers",
     "tqdm",
     "mypy",
     "pyright",
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/_async/test_esql.py 9.1.1-1/test_elasticsearch/test_dsl/_async/test_esql.py
--- 9.1.0-3/test_elasticsearch/test_dsl/_async/test_esql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/_async/test_esql.py	1970-01-01 00:00:00.000000000 +0000
@@ -1,93 +0,0 @@
-#  Licensed to Elasticsearch B.V. under one or more contributor
-#  license agreements. See the NOTICE file distributed with
-#  this work for additional information regarding copyright
-#  ownership. Elasticsearch B.V. licenses this file to you under
-#  the Apache License, Version 2.0 (the "License"); you may
-#  not use this file except in compliance with the License.
-#  You may obtain a copy of the License at
-#
-# 	http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing,
-#  software distributed under the License is distributed on an
-#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-#  KIND, either express or implied.  See the License for the
-#  specific language governing permissions and limitations
-#  under the License.
-
-import pytest
-
-from elasticsearch.dsl import AsyncDocument, M
-from elasticsearch.esql import ESQL, functions
-
-
-class Employee(AsyncDocument):
-    emp_no: M[int]
-    first_name: M[str]
-    last_name: M[str]
-    height: M[float]
-    still_hired: M[bool]
-
-    class Index:
-        name = "employees"
-
-
-async def load_db():
-    data = [
-        [10000, "Joseph", "Wall", 2.2, True],
-        [10001, "Stephanie", "Ward", 1.749, True],
-        [10002, "David", "Keller", 1.872, True],
-        [10003, "Roger", "Hinton", 1.694, False],
-        [10004, "Joshua", "Garcia", 1.661, False],
-        [10005, "Matthew", "Richards", 1.633, False],
-        [10006, "Maria", "Luna", 1.893, True],
-        [10007, "Angela", "Navarro", 1.604, False],
-        [10008, "Maria", "Cannon", 2.079, False],
-        [10009, "Joseph", "Sutton", 2.025, True],
-    ]
-    if await Employee._index.exists():
-        await Employee._index.delete()
-    await Employee.init()
-
-    for e in data:
-        employee = Employee(
-            emp_no=e[0], first_name=e[1], last_name=e[2], height=e[3], still_hired=e[4]
-        )
-        await employee.save()
-    await Employee._index.refresh()
-
-
-@pytest.mark.asyncio
-async def test_esql(async_client):
-    await load_db()
-
-    # get the full names of the employees
-    query = (
-        ESQL.from_(Employee)
-        .eval(name=functions.concat(Employee.first_name, " ", Employee.last_name))
-        .keep("name")
-        .sort("name")
-        .limit(10)
-    )
-    r = await async_client.esql.query(query=str(query))
-    assert r.body["values"] == [
-        ["Angela Navarro"],
-        ["David Keller"],
-        ["Joseph Sutton"],
-        ["Joseph Wall"],
-        ["Joshua Garcia"],
-        ["Maria Cannon"],
-        ["Maria Luna"],
-        ["Matthew Richards"],
-        ["Roger Hinton"],
-        ["Stephanie Ward"],
-    ]
-
-    # get the average height of all hired employees
-    query = ESQL.from_(Employee).stats(
-        avg_height=functions.round(functions.avg(Employee.height), 2).where(
-            Employee.still_hired == True  # noqa: E712
-        )
-    )
-    r = await async_client.esql.query(query=str(query))
-    assert r.body["values"] == [[1.95]]
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/_sync/test_esql.py 9.1.1-1/test_elasticsearch/test_dsl/_sync/test_esql.py
--- 9.1.0-3/test_elasticsearch/test_dsl/_sync/test_esql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/_sync/test_esql.py	1970-01-01 00:00:00.000000000 +0000
@@ -1,93 +0,0 @@
-#  Licensed to Elasticsearch B.V. under one or more contributor
-#  license agreements. See the NOTICE file distributed with
-#  this work for additional information regarding copyright
-#  ownership. Elasticsearch B.V. licenses this file to you under
-#  the Apache License, Version 2.0 (the "License"); you may
-#  not use this file except in compliance with the License.
-#  You may obtain a copy of the License at
-#
-# 	http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing,
-#  software distributed under the License is distributed on an
-#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-#  KIND, either express or implied.  See the License for the
-#  specific language governing permissions and limitations
-#  under the License.
-
-import pytest
-
-from elasticsearch.dsl import Document, M
-from elasticsearch.esql import ESQL, functions
-
-
-class Employee(Document):
-    emp_no: M[int]
-    first_name: M[str]
-    last_name: M[str]
-    height: M[float]
-    still_hired: M[bool]
-
-    class Index:
-        name = "employees"
-
-
-def load_db():
-    data = [
-        [10000, "Joseph", "Wall", 2.2, True],
-        [10001, "Stephanie", "Ward", 1.749, True],
-        [10002, "David", "Keller", 1.872, True],
-        [10003, "Roger", "Hinton", 1.694, False],
-        [10004, "Joshua", "Garcia", 1.661, False],
-        [10005, "Matthew", "Richards", 1.633, False],
-        [10006, "Maria", "Luna", 1.893, True],
-        [10007, "Angela", "Navarro", 1.604, False],
-        [10008, "Maria", "Cannon", 2.079, False],
-        [10009, "Joseph", "Sutton", 2.025, True],
-    ]
-    if Employee._index.exists():
-        Employee._index.delete()
-    Employee.init()
-
-    for e in data:
-        employee = Employee(
-            emp_no=e[0], first_name=e[1], last_name=e[2], height=e[3], still_hired=e[4]
-        )
-        employee.save()
-    Employee._index.refresh()
-
-
-@pytest.mark.sync
-def test_esql(client):
-    load_db()
-
-    # get the full names of the employees
-    query = (
-        ESQL.from_(Employee)
-        .eval(name=functions.concat(Employee.first_name, " ", Employee.last_name))
-        .keep("name")
-        .sort("name")
-        .limit(10)
-    )
-    r = client.esql.query(query=str(query))
-    assert r.body["values"] == [
-        ["Angela Navarro"],
-        ["David Keller"],
-        ["Joseph Sutton"],
-        ["Joseph Wall"],
-        ["Joshua Garcia"],
-        ["Maria Cannon"],
-        ["Maria Luna"],
-        ["Matthew Richards"],
-        ["Roger Hinton"],
-        ["Stephanie Ward"],
-    ]
-
-    # get the average height of all hired employees
-    query = ESQL.from_(Employee).stats(
-        avg_height=functions.round(functions.avg(Employee.height), 2).where(
-            Employee.still_hired == True  # noqa: E712
-        )
-    )
-    r = client.esql.query(query=str(query))
-    assert r.body["values"] == [[1.95]]
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_async/test_document.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_async/test_document.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_async/test_document.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_async/test_document.py	2025-09-12 13:23:45.000000000 +0000
@@ -630,7 +630,9 @@ async def test_can_save_to_different_ind
 async def test_save_without_skip_empty_will_include_empty_fields(
     async_write_client: AsyncElasticsearch,
 ) -> None:
-    test_repo = Repository(field_1=[], field_2=None, field_3={}, meta={"id": 42})
+    test_repo = Repository(
+        field_1=[], field_2=None, field_3={}, owner={"name": None}, meta={"id": 42}
+    )
     assert await test_repo.save(index="test-document", skip_empty=False)
 
     assert_doc_equals(
@@ -638,7 +640,12 @@ async def test_save_without_skip_empty_w
             "found": True,
             "_index": "test-document",
             "_id": "42",
-            "_source": {"field_1": [], "field_2": None, "field_3": {}},
+            "_source": {
+                "field_1": [],
+                "field_2": None,
+                "field_3": {},
+                "owner": {"name": None},
+            },
         },
         await async_write_client.get(index="test-document", id=42),
     )
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_async/test_esql.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_async/test_esql.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_async/test_esql.py	1970-01-01 00:00:00.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_async/test_esql.py	2025-09-12 13:23:45.000000000 +0000
@@ -0,0 +1,254 @@
+#  Licensed to Elasticsearch B.V. under one or more contributor
+#  license agreements. See the NOTICE file distributed with
+#  this work for additional information regarding copyright
+#  ownership. Elasticsearch B.V. licenses this file to you under
+#  the Apache License, Version 2.0 (the "License"); you may
+#  not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+# 	http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+
+import pytest
+
+from elasticsearch.dsl import AsyncDocument, InnerDoc, M
+from elasticsearch.esql import ESQL, E, functions
+
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+
+
+class Employee(AsyncDocument):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = "employees"
+
+
+async def load_db():
+    data = [
+        [
+            10000,
+            "Joseph",
+            "Wall",
+            2.2,
+            True,
+            Address(address="8875 Long Shoals Suite 441", city="Marcville, TX"),
+        ],
+        [
+            10001,
+            "Stephanie",
+            "Ward",
+            1.749,
+            True,
+            Address(address="90162 Carter Harbor Suite 099", city="Davisborough, DE"),
+        ],
+        [
+            10002,
+            "David",
+            "Keller",
+            1.872,
+            True,
+            Address(address="6697 Patrick Union Suite 797", city="Fuentesmouth, SD"),
+        ],
+        [
+            10003,
+            "Roger",
+            "Hinton",
+            1.694,
+            False,
+            Address(address="809 Kelly Mountains", city="South Megan, DE"),
+        ],
+        [
+            10004,
+            "Joshua",
+            "Garcia",
+            1.661,
+            False,
+            Address(address="718 Angela Forks", city="Port Erinland, MA"),
+        ],
+        [
+            10005,
+            "Matthew",
+            "Richards",
+            1.633,
+            False,
+            Address(address="2869 Brown Mountains", city="New Debra, NH"),
+        ],
+        [
+            10006,
+            "Maria",
+            "Luna",
+            1.893,
+            True,
+            Address(address="5861 Morgan Springs", city="Lake Daniel, WI"),
+        ],
+        [
+            10007,
+            "Angela",
+            "Navarro",
+            1.604,
+            False,
+            Address(address="2848 Allen Station", city="Saint Joseph, OR"),
+        ],
+        [
+            10008,
+            "Maria",
+            "Cannon",
+            2.079,
+            False,
+            Address(address="322 NW Johnston", city="Bakerburgh, MP"),
+        ],
+        [
+            10009,
+            "Joseph",
+            "Sutton",
+            2.025,
+            True,
+            Address(address="77 Cardinal E", city="Lakestown, IL"),
+        ],
+    ]
+    if await Employee._index.exists():
+        await Employee._index.delete()
+    await Employee.init()
+
+    for e in data:
+        employee = Employee(
+            emp_no=e[0],
+            first_name=e[1],
+            last_name=e[2],
+            height=e[3],
+            still_hired=e[4],
+            address=e[5],
+        )
+        await employee.save()
+    await Employee._index.refresh()
+
+
+@pytest.mark.asyncio
+async def test_esql(async_client):
+    await load_db()
+
+    # get the full names of the employees
+    query = (
+        ESQL.from_(Employee)
+        .eval(full_name=functions.concat(Employee.first_name, " ", Employee.last_name))
+        .keep("full_name")
+        .sort("full_name")
+        .limit(10)
+    )
+    r = await async_client.esql.query(query=str(query))
+    assert r.body["values"] == [
+        ["Angela Navarro"],
+        ["David Keller"],
+        ["Joseph Sutton"],
+        ["Joseph Wall"],
+        ["Joshua Garcia"],
+        ["Maria Cannon"],
+        ["Maria Luna"],
+        ["Matthew Richards"],
+        ["Roger Hinton"],
+        ["Stephanie Ward"],
+    ]
+
+    # get the average height of all hired employees
+    query = ESQL.from_(Employee).stats(
+        avg_height=functions.round(functions.avg(Employee.height), 2).where(
+            Employee.still_hired == True  # noqa: E712
+        )
+    )
+    r = await async_client.esql.query(query=str(query))
+    assert r.body["values"] == [[1.95]]
+
+    # find employees by name using a parameter
+    query = (
+        ESQL.from_(Employee)
+        .where(Employee.first_name == E("?"))
+        .keep(Employee.last_name)
+        .sort(Employee.last_name.desc())
+    )
+    r = await async_client.esql.query(query=str(query), params=["Maria"])
+    assert r.body["values"] == [["Luna"], ["Cannon"]]
+
+
+@pytest.mark.asyncio
+async def test_esql_dsl(async_client):
+    await load_db()
+
+    # get employees with first name "Maria"
+    query = (
+        Employee.esql_from()
+        .where(Employee.first_name == "Maria")
+        .sort("last_name")
+        .limit(10)
+    )
+    marias = []
+    async for emp in Employee.esql_execute(query):
+        marias.append(emp)
+    assert len(marias) == 2
+    assert marias[0].last_name == "Cannon"
+    assert marias[0].address.address == "322 NW Johnston"
+    assert marias[0].address.city == "Bakerburgh, MP"
+    assert marias[1].last_name == "Luna"
+    assert marias[1].address.address == "5861 Morgan Springs"
+    assert marias[1].address.city == "Lake Daniel, WI"
+
+    # run a query with a missing field
+    query = (
+        Employee.esql_from()
+        .where(Employee.first_name == "Maria")
+        .drop(Employee.address.city)
+        .sort("last_name")
+        .limit(10)
+    )
+    with pytest.raises(ValueError):
+        await Employee.esql_execute(query).__anext__()
+    marias = []
+    async for emp in Employee.esql_execute(query, ignore_missing_fields=True):
+        marias.append(emp)
+    assert marias[0].last_name == "Cannon"
+    assert marias[0].address.address == "322 NW Johnston"
+    assert marias[0].address.city is None
+    assert marias[1].last_name == "Luna"
+    assert marias[1].address.address == "5861 Morgan Springs"
+    assert marias[1].address.city is None
+
+    # run a query with additional calculated fields
+    query = (
+        Employee.esql_from()
+        .where(Employee.first_name == "Maria")
+        .eval(
+            full_name=functions.concat(Employee.first_name, " ", Employee.last_name),
+            height_cm=functions.to_integer(Employee.height * 100),
+        )
+        .sort("last_name")
+        .limit(10)
+    )
+    assert isinstance(await Employee.esql_execute(query).__anext__(), Employee)
+    assert isinstance(
+        await Employee.esql_execute(query, return_additional=True).__anext__(), tuple
+    )
+    marias = []
+    async for emp, extra in Employee.esql_execute(query, return_additional=True):
+        marias.append([emp, extra])
+    assert marias[0][0].last_name == "Cannon"
+    assert marias[0][0].address.address == "322 NW Johnston"
+    assert marias[0][0].address.city == "Bakerburgh, MP"
+    assert marias[0][1] == {"full_name": "Maria Cannon", "height_cm": 208}
+    assert marias[1][0].last_name == "Luna"
+    assert marias[1][0].address.address == "5861 Morgan Springs"
+    assert marias[1][0].address.city == "Lake Daniel, WI"
+    assert marias[1][1] == {"full_name": "Maria Luna", "height_cm": 189}
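The new test above also exercises query parameters: `E("?")` marks a positional placeholder that is bound through the `params` argument at execution time rather than interpolated into the query text. A hedged sketch of the pattern, assuming `client` is a connected `Elasticsearch` instance and `Employee` is the document class defined in the test:

```python
from elasticsearch.esql import ESQL, E

# sketch only: client and Employee come from the surrounding test setup
query = (
    ESQL.from_(Employee)
    .where(Employee.first_name == E("?"))  # placeholder, not a literal
    .keep(Employee.last_name)
)
r = client.esql.query(query=str(query), params=["Maria"])  # binds "?"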
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_sync/test_document.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_sync/test_document.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_sync/test_document.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_sync/test_document.py	2025-09-12 13:23:45.000000000 +0000
@@ -624,7 +624,9 @@ def test_can_save_to_different_index(
 def test_save_without_skip_empty_will_include_empty_fields(
     write_client: Elasticsearch,
 ) -> None:
-    test_repo = Repository(field_1=[], field_2=None, field_3={}, meta={"id": 42})
+    test_repo = Repository(
+        field_1=[], field_2=None, field_3={}, owner={"name": None}, meta={"id": 42}
+    )
     assert test_repo.save(index="test-document", skip_empty=False)
 
     assert_doc_equals(
@@ -632,7 +634,12 @@ def test_save_without_skip_empty_will_in
             "found": True,
             "_index": "test-document",
             "_id": "42",
-            "_source": {"field_1": [], "field_2": None, "field_3": {}},
+            "_source": {
+                "field_1": [],
+                "field_2": None,
+                "field_3": {},
+                "owner": {"name": None},
+            },
         },
         write_client.get(index="test-document", id=42),
     )
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_sync/test_esql.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_sync/test_esql.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/_sync/test_esql.py	1970-01-01 00:00:00.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/_sync/test_esql.py	2025-09-12 13:23:45.000000000 +0000
@@ -0,0 +1,254 @@
+#  Licensed to Elasticsearch B.V. under one or more contributor
+#  license agreements. See the NOTICE file distributed with
+#  this work for additional information regarding copyright
+#  ownership. Elasticsearch B.V. licenses this file to you under
+#  the Apache License, Version 2.0 (the "License"); you may
+#  not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+# 	http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+
+import pytest
+
+from elasticsearch.dsl import Document, InnerDoc, M
+from elasticsearch.esql import ESQL, E, functions
+
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+
+
+class Employee(Document):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = "employees"
+
+
+def load_db():
+    data = [
+        [
+            10000,
+            "Joseph",
+            "Wall",
+            2.2,
+            True,
+            Address(address="8875 Long Shoals Suite 441", city="Marcville, TX"),
+        ],
+        [
+            10001,
+            "Stephanie",
+            "Ward",
+            1.749,
+            True,
+            Address(address="90162 Carter Harbor Suite 099", city="Davisborough, DE"),
+        ],
+        [
+            10002,
+            "David",
+            "Keller",
+            1.872,
+            True,
+            Address(address="6697 Patrick Union Suite 797", city="Fuentesmouth, SD"),
+        ],
+        [
+            10003,
+            "Roger",
+            "Hinton",
+            1.694,
+            False,
+            Address(address="809 Kelly Mountains", city="South Megan, DE"),
+        ],
+        [
+            10004,
+            "Joshua",
+            "Garcia",
+            1.661,
+            False,
+            Address(address="718 Angela Forks", city="Port Erinland, MA"),
+        ],
+        [
+            10005,
+            "Matthew",
+            "Richards",
+            1.633,
+            False,
+            Address(address="2869 Brown Mountains", city="New Debra, NH"),
+        ],
+        [
+            10006,
+            "Maria",
+            "Luna",
+            1.893,
+            True,
+            Address(address="5861 Morgan Springs", city="Lake Daniel, WI"),
+        ],
+        [
+            10007,
+            "Angela",
+            "Navarro",
+            1.604,
+            False,
+            Address(address="2848 Allen Station", city="Saint Joseph, OR"),
+        ],
+        [
+            10008,
+            "Maria",
+            "Cannon",
+            2.079,
+            False,
+            Address(address="322 NW Johnston", city="Bakerburgh, MP"),
+        ],
+        [
+            10009,
+            "Joseph",
+            "Sutton",
+            2.025,
+            True,
+            Address(address="77 Cardinal E", city="Lakestown, IL"),
+        ],
+    ]
+    if Employee._index.exists():
+        Employee._index.delete()
+    Employee.init()
+
+    for e in data:
+        employee = Employee(
+            emp_no=e[0],
+            first_name=e[1],
+            last_name=e[2],
+            height=e[3],
+            still_hired=e[4],
+            address=e[5],
+        )
+        employee.save()
+    Employee._index.refresh()
+
+
+@pytest.mark.sync
+def test_esql(client):
+    load_db()
+
+    # get the full names of the employees
+    query = (
+        ESQL.from_(Employee)
+        .eval(full_name=functions.concat(Employee.first_name, " ", Employee.last_name))
+        .keep("full_name")
+        .sort("full_name")
+        .limit(10)
+    )
+    r = client.esql.query(query=str(query))
+    assert r.body["values"] == [
+        ["Angela Navarro"],
+        ["David Keller"],
+        ["Joseph Sutton"],
+        ["Joseph Wall"],
+        ["Joshua Garcia"],
+        ["Maria Cannon"],
+        ["Maria Luna"],
+        ["Matthew Richards"],
+        ["Roger Hinton"],
+        ["Stephanie Ward"],
+    ]
+
+    # get the average height of all hired employees
+    query = ESQL.from_(Employee).stats(
+        avg_height=functions.round(functions.avg(Employee.height), 2).where(
+            Employee.still_hired == True  # noqa: E712
+        )
+    )
+    r = client.esql.query(query=str(query))
+    assert r.body["values"] == [[1.95]]
+
+    # find employees by name using a parameter
+    query = (
+        ESQL.from_(Employee)
+        .where(Employee.first_name == E("?"))
+        .keep(Employee.last_name)
+        .sort(Employee.last_name.desc())
+    )
+    r = client.esql.query(query=str(query), params=["Maria"])
+    assert r.body["values"] == [["Luna"], ["Cannon"]]
+
+
+@pytest.mark.sync
+def test_esql_dsl(client):
+    load_db()
+
+    # get employees with first name "Maria"
+    query = (
+        Employee.esql_from()
+        .where(Employee.first_name == "Maria")
+        .sort("last_name")
+        .limit(10)
+    )
+    marias = []
+    for emp in Employee.esql_execute(query):
+        marias.append(emp)
+    assert len(marias) == 2
+    assert marias[0].last_name == "Cannon"
+    assert marias[0].address.address == "322 NW Johnston"
+    assert marias[0].address.city == "Bakerburgh, MP"
+    assert marias[1].last_name == "Luna"
+    assert marias[1].address.address == "5861 Morgan Springs"
+    assert marias[1].address.city == "Lake Daniel, WI"
+
+    # run a query with a missing field
+    query = (
+        Employee.esql_from()
+        .where(Employee.first_name == "Maria")
+        .drop(Employee.address.city)
+        .sort("last_name")
+        .limit(10)
+    )
+    with pytest.raises(ValueError):
+        Employee.esql_execute(query).__next__()
+    marias = []
+    for emp in Employee.esql_execute(query, ignore_missing_fields=True):
+        marias.append(emp)
+    assert marias[0].last_name == "Cannon"
+    assert marias[0].address.address == "322 NW Johnston"
+    assert marias[0].address.city is None
+    assert marias[1].last_name == "Luna"
+    assert marias[1].address.address == "5861 Morgan Springs"
+    assert marias[1].address.city is None
+
+    # run a query with additional calculated fields
+    query = (
+        Employee.esql_from()
+        .where(Employee.first_name == "Maria")
+        .eval(
+            full_name=functions.concat(Employee.first_name, " ", Employee.last_name),
+            height_cm=functions.to_integer(Employee.height * 100),
+        )
+        .sort("last_name")
+        .limit(10)
+    )
+    assert isinstance(Employee.esql_execute(query).__next__(), Employee)
+    assert isinstance(
+        Employee.esql_execute(query, return_additional=True).__next__(), tuple
+    )
+    marias = []
+    for emp, extra in Employee.esql_execute(query, return_additional=True):
+        marias.append([emp, extra])
+    assert marias[0][0].last_name == "Cannon"
+    assert marias[0][0].address.address == "322 NW Johnston"
+    assert marias[0][0].address.city == "Bakerburgh, MP"
+    assert marias[0][1] == {"full_name": "Maria Cannon", "height_cm": 208}
+    assert marias[1][0].last_name == "Luna"
+    assert marias[1][0].address.address == "5861 Morgan Springs"
+    assert marias[1][0].address.city == "Lake Daniel, WI"
+    assert marias[1][1] == {"full_name": "Maria Luna", "height_cm": 189}
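A compact restatement of the missing-field contract these tests pin down: when a query drops a field that the document class declares, `esql_execute()` raises `ValueError` unless `ignore_missing_fields=True`, in which case the attribute is populated with `None`. Sketch, reusing the `Employee` class from the test above:

```python
# sketch only: Employee is the document class defined in the test above
query = Employee.esql_from().drop(Employee.address.city).limit(10)

try:
    next(Employee.esql_execute(query))  # city column is missing
except ValueError:
    pass

for emp in Employee.esql_execute(query, ignore_missing_fields=True):
    assert emp.address.city is None  # dropped field comes back as None
```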
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/_async/test_vectors.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/_async/test_vectors.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/_async/test_vectors.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/_async/test_vectors.py	2025-09-12 13:23:45.000000000 +0000
@@ -15,27 +15,27 @@
 #  specific language governing permissions and limitations
 #  under the License.
 
+import sys
 from hashlib import md5
 from typing import Any, List, Tuple
 from unittest import SkipTest
+from unittest.mock import Mock, patch
 
 import pytest
 
 from elasticsearch import AsyncElasticsearch
 
-from ..async_examples import vectors
-
 
 @pytest.mark.asyncio
 async def test_vector_search(
-    async_write_client: AsyncElasticsearch, es_version: Tuple[int, ...], mocker: Any
+    async_write_client: AsyncElasticsearch, es_version: Tuple[int, ...]
 ) -> None:
     # this test only runs on Elasticsearch >= 8.11 because the example uses
     # a dense vector without specifying an explicit size
     if es_version < (8, 11):
         raise SkipTest("This test requires Elasticsearch 8.11 or newer")
 
-    class MockModel:
+    class MockSentenceTransformer:
         def __init__(self, model: Any):
             pass
 
@@ -44,9 +44,22 @@ async def test_vector_search(
             total = sum(vector)
             return [float(v) / total for v in vector]
 
-    mocker.patch.object(vectors, "SentenceTransformer", new=MockModel)
+    def mock_nltk_tokenize(content: str):
+        return content.split("\n")
 
-    await vectors.create()
-    await vectors.WorkplaceDoc._index.refresh()
-    results = await (await vectors.search("Welcome to our team!")).execute()
-    assert results[0].name == "New Employee Onboarding Guide"
+    # mock sentence_transformers and nltk, because they are quite big and
+    # irrelevant for testing the example logic
+    with patch.dict(
+        sys.modules,
+        {
+            "sentence_transformers": Mock(SentenceTransformer=MockSentenceTransformer),
+            "nltk": Mock(sent_tokenize=mock_nltk_tokenize),
+        },
+    ):
+        # import the example after the dependencies are mocked
+        from ..async_examples import vectors
+
+        await vectors.create()
+        await vectors.WorkplaceDoc._index.refresh()
+        results = await (await vectors.search("Welcome to our team!")).execute()
+        assert results[0].name == "Intellectual Property Policy"
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/_sync/test_vectors.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/_sync/test_vectors.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/_sync/test_vectors.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/_sync/test_vectors.py	2025-09-12 13:23:45.000000000 +0000
@@ -15,27 +15,27 @@
 #  specific language governing permissions and limitations
 #  under the License.
 
+import sys
 from hashlib import md5
 from typing import Any, List, Tuple
 from unittest import SkipTest
+from unittest.mock import Mock, patch
 
 import pytest
 
 from elasticsearch import Elasticsearch
 
-from ..examples import vectors
-
 
 @pytest.mark.sync
 def test_vector_search(
-    write_client: Elasticsearch, es_version: Tuple[int, ...], mocker: Any
+    write_client: Elasticsearch, es_version: Tuple[int, ...]
 ) -> None:
     # this test only runs on Elasticsearch >= 8.11 because the example uses
     # a dense vector without specifying an explicit size
     if es_version < (8, 11):
         raise SkipTest("This test requires Elasticsearch 8.11 or newer")
 
-    class MockModel:
+    class MockSentenceTransformer:
         def __init__(self, model: Any):
             pass
 
@@ -44,9 +44,22 @@ def test_vector_search(
             total = sum(vector)
             return [float(v) / total for v in vector]
 
-    mocker.patch.object(vectors, "SentenceTransformer", new=MockModel)
+    def mock_nltk_tokenize(content: str):
+        return content.split("\n")
 
-    vectors.create()
-    vectors.WorkplaceDoc._index.refresh()
-    results = (vectors.search("Welcome to our team!")).execute()
-    assert results[0].name == "New Employee Onboarding Guide"
+    # mock sentence_transformers and nltk, because they are quite big and
+    # irrelevant for testing the example logic
+    with patch.dict(
+        sys.modules,
+        {
+            "sentence_transformers": Mock(SentenceTransformer=MockSentenceTransformer),
+            "nltk": Mock(sent_tokenize=mock_nltk_tokenize),
+        },
+    ):
+        # import the example after the dependencies are mocked
+        from ..examples import vectors
+
+        vectors.create()
+        vectors.WorkplaceDoc._index.refresh()
+        results = (vectors.search("Welcome to our team!")).execute()
+        assert results[0].name == "Intellectual Property Policy"
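The rewritten vectors tests replace `mocker.patch.object()` with `patch.dict(sys.modules, ...)`, which works because Python's import machinery consults `sys.modules` first: any import executed inside the `with` block resolves to the injected mock, so the heavy optional dependencies never need to be installed. A minimal self-contained illustration of the mechanism (module and attribute names are arbitrary):

```python
import sys
from unittest.mock import Mock, patch

fake = Mock(SentenceTransformer=object)
with patch.dict(sys.modules, {"sentence_transformers": fake}):
    # resolved from sys.modules, so no real package is needed
    from sentence_transformers import SentenceTransformer

    assert SentenceTransformer is fake.SentenceTransformer
```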
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/async_examples/esql_employees.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/async_examples/esql_employees.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/async_examples/esql_employees.py	1970-01-01 00:00:00.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/async_examples/esql_employees.py	2025-09-12 13:23:45.000000000 +0000
@@ -0,0 +1,170 @@
+#  Licensed to Elasticsearch B.V. under one or more contributor
+#  license agreements. See the NOTICE file distributed with
+#  this work for additional information regarding copyright
+#  ownership. Elasticsearch B.V. licenses this file to you under
+#  the Apache License, Version 2.0 (the "License"); you may
+#  not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+# 	http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+
+"""
+# ES|QL query builder example
+
+Requirements:
+
+$ pip install "elasticsearch[async]" faker
+
+To run the example:
+
+$ python esql_employees.py "name to search"
+
+The index will be created automatically with a list of 1000 randomly generated
+employees if it does not exist. Add `--recreate-index` or `-r` to the command
+to regenerate it.
+
+Examples:
+
+$ python esql_employees.py "Mark"  # employees named Mark (first or last names)
+$ python esql_employees.py "Sarah" --limit 10  # up to 10 employees named Sarah
+$ python esql_employees.py "Sam" --sort height  # sort results by height
+$ python esql_employees.py "Sam" --sort name  # sort results by last name
+"""
+
+import argparse
+import asyncio
+import os
+import random
+
+from faker import Faker
+
+from elasticsearch.dsl import AsyncDocument, InnerDoc, M, async_connections
+from elasticsearch.esql import ESQLBase
+from elasticsearch.esql.functions import concat, multi_match
+
+fake = Faker()
+
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+    zip_code: M[str]
+
+
+class Employee(AsyncDocument):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = "employees"
+
+    @property
+    def name(self) -> str:
+        return f"{self.first_name} {self.last_name}"
+
+    def __repr__(self) -> str:
+        return f"<Employee[{self.meta.id}]: {self.first_name} {self.last_name}>"
+
+
+async def create(num_employees: int = 1000) -> None:
+    print("Creating a new employee index...")
+    if await Employee._index.exists():
+        await Employee._index.delete()
+    await Employee.init()
+
+    for i in range(num_employees):
+        address = Address(
+            address=fake.address(), city=fake.city(), zip_code=fake.zipcode()
+        )
+        emp = Employee(
+            emp_no=10000 + i,
+            first_name=fake.first_name(),
+            last_name=fake.last_name(),
+            height=int((random.random() * 0.8 + 1.5) * 1000) / 1000,
+            still_hired=random.random() >= 0.5,
+            address=address,
+        )
+        await emp.save()
+    await Employee._index.refresh()
+
+
+async def search(query: str, limit: int, sort: str) -> None:
+    q: ESQLBase = (
+        Employee.esql_from()
+        .where(multi_match(query, Employee.first_name, Employee.last_name))
+        .eval(full_name=concat(Employee.first_name, " ", Employee.last_name))
+    )
+    if sort == "height":
+        q = q.sort(Employee.height.desc())
+    elif sort == "name":
+        q = q.sort(Employee.last_name.asc())
+    q = q.limit(limit)
+    async for result in Employee.esql_execute(q, return_additional=True):
+        assert isinstance(result, tuple)
+        employee = result[0]
+        full_name = result[1]["full_name"]
+        print(
+            f"{full_name:<20}",
+            f"{'Hired' if employee.still_hired else 'Not hired':<10}",
+            f"{employee.height:5.2f}m",
+            f"{employee.address.city:<20}",
+        )
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(description="Employee ES|QL example")
+    parser.add_argument(
+        "--recreate-index",
+        "-r",
+        action="store_true",
+        help="Recreate and populate the index",
+    )
+    parser.add_argument(
+        "--limit",
+        action="store",
+        type=int,
+        default=100,
+        help="Maximum number or employees to return (default: 100)",
+    )
+    parser.add_argument(
+        "--sort",
+        action="store",
+        type=str,
+        default=None,
+        help='Sort by "name" (ascending) or by "height" (descending) (default: no sorting)',
+    )
+    parser.add_argument(
+        "query", action="store", help="The name or partial name to search for"
+    )
+    return parser.parse_args()
+
+
+async def main() -> None:
+    args = parse_args()
+
+    # initiate the default connection to elasticsearch
+    async_connections.create_connection(hosts=[os.environ["ELASTICSEARCH_URL"]])
+
+    if args.recreate_index or not await Employee._index.exists():
+        await create()
+    await Employee.init()
+
+    await search(args.query, args.limit, args.sort)
+
+    # close the connection
+    await async_connections.get_connection().close()
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/async/esql_employees.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/async/esql_employees.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/async/esql_employees.py	1970-01-01 00:00:00.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/async/esql_employees.py	2025-09-12 13:23:45.000000000 +0000
@@ -0,0 +1,170 @@
+#  Licensed to Elasticsearch B.V. under one or more contributor
+#  license agreements. See the NOTICE file distributed with
+#  this work for additional information regarding copyright
+#  ownership. Elasticsearch B.V. licenses this file to you under
+#  the Apache License, Version 2.0 (the "License"); you may
+#  not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+# 	http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+
+"""
+# ES|QL query builder example
+
+Requirements:
+
+$ pip install "elasticsearch[async]" faker
+
+To run the example:
+
+$ python esql_employees.py "name to search"
+
+If it does not exist, the index is created automatically and populated with
+1000 randomly generated employees. Add `--recreate-index` or `-r` to the
+command to regenerate it.
+
+Examples:
+
+$ python esql_employees "Mark"  # employees named Mark (first or last names)
+$ python esql_employees "Sarah" --limit 10  # up to 10 employees named Sarah
+$ python esql_employees "Sam" --sort height  # sort results by height
+$ python esql_employees "Sam" --sort name  # sort results by last name
+"""
+
+import argparse
+import asyncio
+import os
+import random
+
+from faker import Faker
+
+from elasticsearch.dsl import AsyncDocument, InnerDoc, M, async_connections
+from elasticsearch.esql import ESQLBase
+from elasticsearch.esql.functions import concat, multi_match
+
+fake = Faker()
+
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+    zip_code: M[str]
+
+
+class Employee(AsyncDocument):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = "employees"
+
+    @property
+    def name(self) -> str:
+        return f"{self.first_name} {self.last_name}"
+
+    def __repr__(self) -> str:
+        return f"<Employee[{self.meta.id}]: {self.first_name} {self.last_name}>"
+
+
+async def create(num_employees: int = 1000) -> None:
+    print("Creating a new employee index...")
+    if await Employee._index.exists():
+        await Employee._index.delete()
+    await Employee.init()
+
+    for i in range(num_employees):
+        address = Address(
+            address=fake.address(), city=fake.city(), zip_code=fake.zipcode()
+        )
+        emp = Employee(
+            emp_no=10000 + i,
+            first_name=fake.first_name(),
+            last_name=fake.last_name(),
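+            # height between 1.50m and 2.30m, truncated to millimeter precision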
+            height=int((random.random() * 0.8 + 1.5) * 1000) / 1000,
+            still_hired=random.random() >= 0.5,
+            address=address,
+        )
+        await emp.save()
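+    # refresh the index so the newly saved documents are immediately searchable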
+    await Employee._index.refresh()
+
+
+async def search(query: str, limit: int, sort: str) -> None:
+    q: ESQLBase = (
+        Employee.esql_from()
+        .where(multi_match(query, Employee.first_name, Employee.last_name))
+        .eval(full_name=concat(Employee.first_name, " ", Employee.last_name))
+    )
+    if sort == "height":
+        q = q.sort(Employee.height.desc())
+    elif sort == "name":
+        q = q.sort(Employee.last_name.asc())
+    q = q.limit(limit)
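+    # with return_additional=True, esql_execute() yields (document, extras) tuples,
+    # where extras is a dict holding the computed columns, such as full_name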
+    async for result in Employee.esql_execute(q, return_additional=True):
+        assert isinstance(result, tuple)
+        employee = result[0]
+        full_name = result[1]["full_name"]
+        print(
+            f"{full_name:<20}",
+            f"{'Hired' if employee.still_hired else 'Not hired':<10}",
+            f"{employee.height:5.2f}m",
+            f"{employee.address.city:<20}",
+        )
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(description="Employee ES|QL example")
+    parser.add_argument(
+        "--recreate-index",
+        "-r",
+        action="store_true",
+        help="Recreate and populate the index",
+    )
+    parser.add_argument(
+        "--limit",
+        action="store",
+        type=int,
+        default=100,
+        help="Maximum number or employees to return (default: 100)",
+    )
+    parser.add_argument(
+        "--sort",
+        action="store",
+        type=str,
+        default=None,
+        help='Sort by "name" (ascending) or by "height" (descending) (default: no sorting)',
+    )
+    parser.add_argument(
+        "query", action="store", help="The name or partial name to search for"
+    )
+    return parser.parse_args()
+
+
+async def main() -> None:
+    args = parse_args()
+
+    # initiate the default connection to elasticsearch
+    async_connections.create_connection(hosts=[os.environ["ELASTICSEARCH_URL"]])
+
+    if args.recreate_index or not await Employee._index.exists():
+        await create()
+    await Employee.init()
+
+    await search(args.query, args.limit, args.sort)
+
+    # close the connection
+    await async_connections.get_connection().close()
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/esql_employees.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/esql_employees.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/esql_employees.py	1970-01-01 00:00:00.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/esql_employees.py	2025-09-12 13:23:45.000000000 +0000
@@ -0,0 +1,169 @@
+#  Licensed to Elasticsearch B.V. under one or more contributor
+#  license agreements. See the NOTICE file distributed with
+#  this work for additional information regarding copyright
+#  ownership. Elasticsearch B.V. licenses this file to you under
+#  the Apache License, Version 2.0 (the "License"); you may
+#  not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+# 	http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+
+"""
+# ES|QL query builder example
+
+Requirements:
+
+$ pip install elasticsearch faker
+
+To run the example:
+
+$ python esql_employees.py "name to search"
+
+If it does not exist, the index is created automatically and populated with
+1000 randomly generated employees. Add `--recreate-index` or `-r` to the
+command to regenerate it.
+
+Examples:
+
+$ python esql_employees "Mark"  # employees named Mark (first or last names)
+$ python esql_employees "Sarah" --limit 10  # up to 10 employees named Sarah
+$ python esql_employees "Sam" --sort height  # sort results by height
+$ python esql_employees "Sam" --sort name  # sort results by last name
+"""
+
+import argparse
+import os
+import random
+
+from faker import Faker
+
+from elasticsearch.dsl import Document, InnerDoc, M, connections
+from elasticsearch.esql import ESQLBase
+from elasticsearch.esql.functions import concat, multi_match
+
+fake = Faker()
+
+
+class Address(InnerDoc):
+    address: M[str]
+    city: M[str]
+    zip_code: M[str]
+
+
+class Employee(Document):
+    emp_no: M[int]
+    first_name: M[str]
+    last_name: M[str]
+    height: M[float]
+    still_hired: M[bool]
+    address: M[Address]
+
+    class Index:
+        name = "employees"
+
+    @property
+    def name(self) -> str:
+        return f"{self.first_name} {self.last_name}"
+
+    def __repr__(self) -> str:
+        return f"<Employee[{self.meta.id}]: {self.first_name} {self.last_name}>"
+
+
+def create(num_employees: int = 1000) -> None:
+    print("Creating a new employee index...")
+    if Employee._index.exists():
+        Employee._index.delete()
+    Employee.init()
+
+    for i in range(num_employees):
+        address = Address(
+            address=fake.address(), city=fake.city(), zip_code=fake.zipcode()
+        )
+        emp = Employee(
+            emp_no=10000 + i,
+            first_name=fake.first_name(),
+            last_name=fake.last_name(),
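+            # height between 1.50m and 2.30m, truncated to millimeter precision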
+            height=int((random.random() * 0.8 + 1.5) * 1000) / 1000,
+            still_hired=random.random() >= 0.5,
+            address=address,
+        )
+        emp.save()
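+    # refresh the index so the newly saved documents are immediately searchable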
+    Employee._index.refresh()
+
+
+def search(query: str, limit: int, sort: str) -> None:
+    q: ESQLBase = (
+        Employee.esql_from()
+        .where(multi_match(query, Employee.first_name, Employee.last_name))
+        .eval(full_name=concat(Employee.first_name, " ", Employee.last_name))
+    )
+    if sort == "height":
+        q = q.sort(Employee.height.desc())
+    elif sort == "name":
+        q = q.sort(Employee.last_name.asc())
+    q = q.limit(limit)
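+    # with return_additional=True, esql_execute() yields (document, extras) tuples,
+    # where extras is a dict holding the computed columns, such as full_name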
+    for result in Employee.esql_execute(q, return_additional=True):
+        assert isinstance(result, tuple)
+        employee = result[0]
+        full_name = result[1]["full_name"]
+        print(
+            f"{full_name:<20}",
+            f"{'Hired' if employee.still_hired else 'Not hired':<10}",
+            f"{employee.height:5.2f}m",
+            f"{employee.address.city:<20}",
+        )
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(description="Employee ES|QL example")
+    parser.add_argument(
+        "--recreate-index",
+        "-r",
+        action="store_true",
+        help="Recreate and populate the index",
+    )
+    parser.add_argument(
+        "--limit",
+        action="store",
+        type=int,
+        default=100,
+        help="Maximum number or employees to return (default: 100)",
+    )
+    parser.add_argument(
+        "--sort",
+        action="store",
+        type=str,
+        default=None,
+        help='Sort by "name" (ascending) or by "height" (descending) (default: no sorting)',
+    )
+    parser.add_argument(
+        "query", action="store", help="The name or partial name to search for"
+    )
+    return parser.parse_args()
+
+
+def main() -> None:
+    args = parse_args()
+
+    # initiate the default connection to elasticsearch
+    connections.create_connection(hosts=[os.environ["ELASTICSEARCH_URL"]])
+
+    if args.recreate_index or not Employee._index.exists():
+        create()
+    Employee.init()
+
+    search(args.query, args.limit, args.sort)
+
+    # close the connection
+    connections.get_connection().close()
+
+
+if __name__ == "__main__":
+    main()
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/semantic_text.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/semantic_text.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/semantic_text.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/semantic_text.py	2025-09-12 13:23:45.000000000 +0000
@@ -21,7 +21,7 @@
 
 Requirements:
 
-$ pip install "elasticsearch" tqdm
+$ pip install elasticsearch tqdm
 
 Before running this example, an ELSER inference endpoint must be created in the
 Elasticsearch cluster. This can be done manually from Kibana, or with the
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/sparse_vectors.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/sparse_vectors.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/sparse_vectors.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/sparse_vectors.py	2025-09-12 13:23:45.000000000 +0000
@@ -20,7 +20,7 @@
 
 Requirements:
 
-$ pip install nltk tqdm "elasticsearch"
+$ pip install nltk tqdm elasticsearch
 
 Before running this example, the ELSER v2 model must be downloaded and deployed
 to the Elasticsearch cluster, and an ingest pipeline must be defined. This can
diff -pruN 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/vectors.py 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/vectors.py
--- 9.1.0-3/test_elasticsearch/test_dsl/test_integration/test_examples/examples/vectors.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_dsl/test_integration/test_examples/examples/vectors.py	2025-09-12 13:23:45.000000000 +0000
@@ -20,7 +20,7 @@
 
 Requirements:
 
-$ pip install nltk sentence_transformers tqdm "elasticsearch"
+$ pip install nltk sentence_transformers tqdm elasticsearch
 
 To run the example:
 
diff -pruN 9.1.0-3/test_elasticsearch/test_esql.py 9.1.1-1/test_elasticsearch/test_esql.py
--- 9.1.0-3/test_elasticsearch/test_esql.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_esql.py	2025-09-12 13:23:45.000000000 +0000
@@ -84,7 +84,7 @@ def test_completion():
     assert (
         query.render()
         == """ROW question = "What is Elasticsearch?"
-| COMPLETION question WITH test_completion_model
+| COMPLETION question WITH {"inference_id": "test_completion_model"}
 | KEEP question, completion"""
     )
 
@@ -97,7 +97,7 @@ def test_completion():
     assert (
         query.render()
         == """ROW question = "What is Elasticsearch?"
-| COMPLETION answer = question WITH test_completion_model
+| COMPLETION answer = question WITH {"inference_id": "test_completion_model"}
 | KEEP question, answer"""
     )
 
@@ -128,7 +128,7 @@ def test_completion():
       "Synopsis: ", synopsis, "\\n",
       "Actors: ", MV_CONCAT(actors, ", "), "\\n",
   )
-| COMPLETION summary = prompt WITH test_completion_model
+| COMPLETION summary = prompt WITH {"inference_id": "test_completion_model"}
 | KEEP title, summary, rating"""
     )
 
@@ -160,7 +160,7 @@ def test_completion():
 | SORT rating DESC
 | LIMIT 10
 | EVAL prompt = CONCAT("Summarize this movie using the following information: \\n", "Title: ", title, "\\n", "Synopsis: ", synopsis, "\\n", "Actors: ", MV_CONCAT(actors, ", "), "\\n")
-| COMPLETION summary = prompt WITH test_completion_model
+| COMPLETION summary = prompt WITH {"inference_id": "test_completion_model"}
 | KEEP title, summary, rating"""
     )
 
@@ -713,3 +713,11 @@ def test_match_operator():
         == """FROM books
 | WHERE author:"Faulkner\""""
     )
+
+
+def test_parameters():
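+    # "?" markers are preserved verbatim in the rendered query; the
+    # corresponding values are supplied separately when the query is executed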
+    query = ESQL.from_("employees").where("name == ?")
+    assert query.render() == "FROM employees\n| WHERE name == ?"
+
+    query = ESQL.from_("employees").where(E("name") == E("?"))
+    assert query.render() == "FROM employees\n| WHERE name == ?"
diff -pruN 9.1.0-3/test_elasticsearch/test_server/test_rest_api_spec.py 9.1.1-1/test_elasticsearch/test_server/test_rest_api_spec.py
--- 9.1.0-3/test_elasticsearch/test_server/test_rest_api_spec.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/test_server/test_rest_api_spec.py	2025-09-12 13:23:45.000000000 +0000
@@ -78,6 +78,7 @@ FAILING_TESTS = {
     "cluster/voting_config_exclusions",
     "entsearch/10_basic",
     "indices/clone",
+    "indices/data_stream_mappings[0]",
     "indices/resolve_cluster",
     "indices/settings",
     "indices/split",
@@ -494,7 +495,7 @@ YAML_TEST_SPECS = []
 # Try loading the REST API test specs from the Elastic Artifacts API
 try:
     # Construct the HTTP and Elasticsearch client
-    http = urllib3.PoolManager(retries=10)
+    http = urllib3.PoolManager(retries=urllib3.Retry(total=10))
 
     yaml_tests_url = (
         "https://api.github.com/repos/elastic/elasticsearch-clients-tests/zipball/main"
diff -pruN 9.1.0-3/test_elasticsearch/utils.py 9.1.1-1/test_elasticsearch/utils.py
--- 9.1.0-3/test_elasticsearch/utils.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/test_elasticsearch/utils.py	2025-09-12 13:23:45.000000000 +0000
@@ -179,7 +179,7 @@ def wipe_data_streams(client):
 def wipe_indices(client):
     indices = client.cat.indices().strip().splitlines()
     if len(indices) > 0:
-        index_names = [i.split(" ")[2] for i in indices]
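+        # cat output pads columns with runs of spaces, which split() handles
+        # correctly and split(" ") does not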
+        index_names = [i.split()[2] for i in indices]
         client.options(ignore_status=404).indices.delete(
             index=",".join(index_names),
             expand_wildcards="all",
diff -pruN 9.1.0-3/utils/build-dists.py 9.1.1-1/utils/build-dists.py
--- 9.1.0-3/utils/build-dists.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/utils/build-dists.py	2025-09-12 13:23:45.000000000 +0000
@@ -121,6 +121,7 @@ def test_dist(dist):
                 "--install-types",
                 "--non-interactive",
                 "--ignore-missing-imports",
+                "--implicit-reexport",
                 os.path.join(base_dir, "test_elasticsearch/test_types/async_types.py"),
             )
 
@@ -145,6 +146,7 @@ def test_dist(dist):
                 "--install-types",
                 "--non-interactive",
                 "--ignore-missing-imports",
+                "--implicit-reexport",
                 os.path.join(base_dir, "test_elasticsearch/test_types/sync_types.py"),
             )
         else:
@@ -156,6 +158,7 @@ def test_dist(dist):
                 "--install-types",
                 "--non-interactive",
                 "--ignore-missing-imports",
+                "--implicit-reexport",
                 os.path.join(
                     base_dir, "test_elasticsearch/test_types/aliased_types.py"
                 ),
diff -pruN 9.1.0-3/utils/run-unasync-dsl.py 9.1.1-1/utils/run-unasync-dsl.py
--- 9.1.0-3/utils/run-unasync-dsl.py	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/utils/run-unasync-dsl.py	2025-09-12 13:23:45.000000000 +0000
@@ -121,7 +121,7 @@ def main(check=False):
                 [
                     "sed",
                     "-i.bak",
-                    "s/elasticsearch\\[async\\]/elasticsearch/",
+                    's/"elasticsearch\\[async\\]"/elasticsearch/',
                     f"{output_dir}{file}",
                 ]
             )
diff -pruN 9.1.0-3/utils/templates/field.py.tpl 9.1.1-1/utils/templates/field.py.tpl
--- 9.1.0-3/utils/templates/field.py.tpl	2025-07-30 08:51:18.000000000 +0000
+++ 9.1.1-1/utils/templates/field.py.tpl	2025-09-12 13:23:45.000000000 +0000
@@ -119,9 +119,16 @@ class Field(DslBase):
     def __getitem__(self, subfield: str) -> "Field":
         return cast(Field, self._params.get("fields", {})[subfield])
 
-    def _serialize(self, data: Any) -> Any:
+    def _serialize(self, data: Any, skip_empty: bool) -> Any:
         return data
 
+    def _safe_serialize(self, data: Any, skip_empty: bool) -> Any:
+        try:
+            return self._serialize(data, skip_empty)
+        except TypeError:
+            # older method signature, without skip_empty
+            return self._serialize(data)  # type: ignore[call-arg]
+
     def _deserialize(self, data: Any) -> Any:
         return data
 
@@ -133,10 +140,10 @@ class Field(DslBase):
             return AttrList([])
         return self._empty()
 
-    def serialize(self, data: Any) -> Any:
+    def serialize(self, data: Any, skip_empty: bool = True) -> Any:
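+        # skip_empty is forwarded to nested object serialization; the True
+        # default mirrors to_dict()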
         if isinstance(data, (list, AttrList, tuple)):
-            return list(map(self._serialize, cast(Iterable[Any], data)))
-        return self._serialize(data)
+            return [self._safe_serialize(v, skip_empty) for v in cast(Iterable[Any], data)]
+        return self._safe_serialize(data, skip_empty)
 
     def deserialize(self, data: Any) -> Any:
         if isinstance(data, (list, AttrList, tuple)):
@@ -186,7 +193,7 @@ class RangeField(Field):
         data = {k: self._core_field.deserialize(v) for k, v in data.items()}  # type: ignore[union-attr]
         return Range(data)
 
-    def _serialize(self, data: Any) -> Optional[Dict[str, Any]]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[Dict[str, Any]]:
         if data is None:
             return None
         if not isinstance(data, collections.abc.Mapping):
@@ -318,7 +325,7 @@ class {{ k.name }}({{ k.parent }}):
         return self._wrap(data)
 
     def _serialize(
-        self, data: Optional[Union[Dict[str, Any], "InnerDoc"]]
+        self, data: Optional[Union[Dict[str, Any], "InnerDoc"]], skip_empty: bool
     ) -> Optional[Dict[str, Any]]:
         if data is None:
             return None
@@ -327,7 +334,7 @@ class {{ k.name }}({{ k.parent }}):
         if isinstance(data, collections.abc.Mapping):
             return data
 
-        return data.to_dict()
+        return data.to_dict(skip_empty=skip_empty)
 
     def clean(self, data: Any) -> Any:
         data = super().clean(data)
@@ -433,7 +440,7 @@ class {{ k.name }}({{ k.parent }}):
         # the ipaddress library for pypy only accepts unicode.
         return ipaddress.ip_address(unicode(data))
 
-    def _serialize(self, data: Any) -> Optional[str]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[str]:
         if data is None:
             return None
         return str(data)        
@@ -448,7 +455,7 @@ class {{ k.name }}({{ k.parent }}):
     def _deserialize(self, data: Any) -> bytes:
         return base64.b64decode(data)
 
-    def _serialize(self, data: Any) -> Optional[str]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[str]:
         if data is None:
             return None
         return base64.b64encode(data).decode()        
@@ -458,7 +465,7 @@ class {{ k.name }}({{ k.parent }}):
     def _deserialize(self, data: Any) -> "Query":
         return Q(data)  # type: ignore[no-any-return]
 
-    def _serialize(self, data: Any) -> Optional[Dict[str, Any]]:
+    def _serialize(self, data: Any, skip_empty: bool) -> Optional[Dict[str, Any]]:
         if data is None:
             return None
         return data.to_dict()  # type: ignore[no-any-return]
