AsyncIndex

Obtain an AsyncIndex via pinecone.AsyncPinecone.index().

from pinecone import AsyncPinecone, AsyncIndex

pc = AsyncPinecone(api_key="your-api-key")

# Resolve host automatically by index name
async with await pc.index("my-index") as idx:
    stats = await idx.describe_index_stats()

# — or — connect directly with a host URL
async with AsyncIndex(host="my-index-abc123.svc.pinecone.io", api_key="...") as idx:
    stats = await idx.describe_index_stats()

AsyncIndex mirrors Index, but every method is an async def. It is an async context manager; use async with (or call await idx.close() explicitly) to release the underlying HTTP connection pool.

class pinecone.async_client.async_index.AsyncIndex(*, host, api_key=None, additional_headers=None, timeout=30.0, proxy_url=None, proxy_headers=None, ssl_ca_certs=None, ssl_verify=True, source_tag=None, connection_pool_maxsize=0)[source]

Bases: object

Asynchronous data plane client targeting a specific Pinecone index.

Can be constructed directly with a host URL, or via the AsyncPinecone.index() factory method.

Parameters:
  • host (str) – The index-specific data plane host URL.

  • api_key (str | None) – Pinecone API key. Falls back to PINECONE_API_KEY env var.

  • additional_headers (dict[str, str] | None) – Extra headers included in every request.

  • timeout (float) – Request timeout in seconds. Defaults to 30.0.

  • proxy_url (str | None) – HTTP proxy URL for outgoing requests.

  • ssl_ca_certs (str | None) – Path to a CA certificate bundle for SSL verification.

  • ssl_verify (bool) – Whether to verify SSL certificates. Defaults to True.

  • source_tag (str | None) – Tag appended to the User-Agent string for request attribution.

  • connection_pool_maxsize (int) – Maximum number of connections to keep in the pool. 0 (default) uses httpx defaults.

  • proxy_headers (dict[str, str] | None) – Extra headers to send to the proxy (e.g. proxy authorization).

Raises:

PineconeValueError – If no API key can be resolved or the host is invalid.

Examples

from pinecone import AsyncIndex

async with AsyncIndex(host="my-index-abc123.svc.pinecone.io", api_key="...") as idx:
    print(idx.host)

__init__(*, host, api_key=None, additional_headers=None, timeout=30.0, proxy_url=None, proxy_headers=None, ssl_ca_certs=None, ssl_verify=True, source_tag=None, connection_pool_maxsize=0)[source]
Parameters:
  • host (str)

  • api_key (str | None)

  • additional_headers (dict[str, str] | None)

  • timeout (float)

  • proxy_url (str | None)

  • proxy_headers (dict[str, str] | None)

  • ssl_ca_certs (str | None)

  • ssl_verify (bool)

  • source_tag (str | None)

  • connection_pool_maxsize (int)

Return type:

None

property host: str

The data plane host URL for this index.

async upsert_records(*, records, namespace, timeout=None)[source]

Upsert records for indexes with integrated inference.

Records are sent as newline-delimited JSON (NDJSON). Embeddings are generated server-side.

Parameters:
  • records (list[dict[str, Any]]) – List of record dicts. Each must contain an _id or id field. Additional fields are passed through for server-side embedding.

  • namespace (str) – Target namespace (required). Unlike upsert(), namespace has no default because the records API requires an explicit namespace (must be non-empty).

  • timeout (float | None)

Returns:

UpsertRecordsResponse with the count of records submitted.

Raises:
  • PineconeValueError – If namespace is not a string or is empty/whitespace, records is empty, or a record is missing an identifier field.

  • ApiError – If the API returns an error response.

  • PineconeConnectionError – If a network-level connection fails (DNS, refused, transport error).

  • PineconeTimeoutError – If the request exceeds the configured timeout.

Return type:

UpsertRecordsResponse

Examples

response = await idx.upsert_records(
    namespace="articles-en",
    records=[
        {
            "_id": "article-101",
            "text": "Vector databases enable similarity search.",
        },
        {"_id": "article-102", "text": "RAG combines search with LLMs."},
    ],
)
print(response.record_count)

See also

  • upsert() — for indexes where you provide your own vectors (no server-side embedding).

  • start_import() — for bulk loading millions of vectors from cloud storage (S3, GCS).

async upsert(*, vectors, namespace='', batch_size=None, show_progress=True, max_concurrency=4, timeout=None)[source]

Upsert a batch of vectors into a namespace.

If a vector with the same ID already exists in the namespace, it is overwritten.

Parameters:
  • vectors (Sequence[Vector | tuple[str, list[float]] | tuple[str, list[float], dict[str, Any]] | dict[str, Any]]) – Sequence of vectors to upsert. Each element can be a Vector instance, a tuple of (id, values) or (id, values, metadata), or a dict with id, values, and optional sparse_values / metadata keys.

  • namespace (str) – Target namespace. Defaults to the default (empty-string) namespace.

  • batch_size (int | None) – Split vectors into chunks of this size and send one request per chunk. Default None sends everything in a single request. Must be a positive integer if provided.

  • show_progress (bool) – When True and tqdm is installed, display a progress bar across batches. Has no effect when batch_size is None or tqdm is not installed. Defaults to True.

  • max_concurrency (int) – Asyncio concurrency limit for concurrent batch requests (range 1–64, default 4). Only used when batch_size is set.

  • timeout (float | None) – Per-request timeout in seconds. Overrides the client-level default for this call only.

Returns:

UpsertResponse with the count of vectors upserted. When batch_size triggers multiple requests, response_info carries the aggregate LSN from all successful batches (or None if no LSN headers were returned).

Return type:

UpsertResponse

Examples

from pinecone import Vector

response = await idx.upsert(
    vectors=[
        Vector(
            id="article-101",
            values=[0.012, -0.087, 0.153],  # truncated; use your actual dimension
        ),
        ("article-102", [0.045, 0.021, -0.064]),  # truncated
        {"id": "article-103", "values": [0.091, -0.032, 0.178]},  # truncated
    ],
    namespace="articles-en",
)
print(response.upserted_count)

# Upsert 1000 vectors in batches of 100
response = await idx.upsert(
    vectors=large_vector_list,
    batch_size=100,
    show_progress=True,
)
print(response.upserted_count)

Note

When batch_size is set, batches are submitted concurrently via an asyncio.Semaphore of max_concurrency slots (default 4, range 1–64). Per-batch HTTP retries are handled by the client’s configured RetryConfig. Partial failures do not raise — per-batch errors are captured on the returned UpsertResponse (see response.has_errors, response.errors, response.failed_items). To retry only the failures, pass response.failed_items back to upsert(...).
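
A retry loop built on those partial-failure fields might look like this (the helper name and attempt cap are illustrative, not part of the SDK):

```python
# Sketch: resubmit only the vectors whose batches failed, using the
# partial-failure fields described above (has_errors / failed_items).
# The function name and max_attempts cap are illustrative.
async def upsert_with_retries(idx, vectors, batch_size=100, max_attempts=3):
    response = await idx.upsert(vectors=vectors, batch_size=batch_size)
    for _ in range(max_attempts - 1):
        if not response.has_errors:
            break
        # failed_items holds the vectors from errored batches; resubmit them
        response = await idx.upsert(
            vectors=response.failed_items, batch_size=batch_size
        )
    return response
```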

See also

  • upsert_records() — for indexes with integrated inference (text in, server-side embedding).

  • start_import() — for bulk loading millions of vectors from cloud storage (S3, GCS).

async upsert_from_dataframe(df, namespace=None, batch_size=500, show_progress=True)[source]

Not supported for async clients.

This is a known limitation of the async client. Instead, batch your data and call upsert() in a loop. For very large datasets, use start_import() for bulk loading from cloud storage.

Parameters:
  • df (pd.DataFrame)

  • namespace (str | None)

  • batch_size (int)

  • show_progress (bool)

Return type:

UpsertResponse
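
A sketch of that workaround, assuming a DataFrame with id / values / metadata columns (the column names and the chunked helper are assumptions, not part of the SDK):

```python
from itertools import islice

def chunked(rows, size):
    """Yield successive lists of at most `size` rows."""
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch

async def upsert_dataframe(idx, df, namespace="", batch_size=500):
    total = 0
    # one dict per DataFrame row; column layout is an assumption
    for batch in chunked(df.to_dict("records"), batch_size):
        vectors = [
            {"id": r["id"], "values": r["values"], "metadata": r.get("metadata")}
            for r in batch
        ]
        response = await idx.upsert(vectors=vectors, namespace=namespace)
        total += response.upserted_count
    return total
```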

async query(*, top_k, vector=None, id=None, namespace='', filter=None, include_values=False, include_metadata=False, sparse_vector=None, scan_factor=None, max_candidates=None, timeout=None)[source]

Query a namespace for the nearest neighbors of a vector.

Parameters:
  • top_k (int) – Number of results to return (must be >= 1).

  • vector (list[float] | None) – Dense query vector values.

  • id (str | None) – ID of a stored vector to use as the query.

  • namespace (str) – Namespace to query. Defaults to the default namespace.

  • filter (dict[str, Any] | None) – Metadata filter expression.

  • include_values (bool) – Whether to include vector values in results.

  • include_metadata (bool) – Whether to include metadata in results.

  • sparse_vector (SparseValues | dict[str, Any] | None) – Sparse query vector with indices and values.

  • scan_factor (float | None) – DRN optimization — adjusts how much of the index is scanned. Range 0.5–4.0. Only supported for dedicated read node indexes. None uses server default.

  • max_candidates (int | None) – DRN optimization — caps candidate vectors to rerank. Range 1–100000. Only supported for dedicated read node indexes. None uses server default.

  • timeout (float | None)

Returns:

QueryResponse with matches, namespace, and usage info.

Raises:
  • PineconeValueError – If top_k < 1, both vector and id are provided, or none of vector, id, or sparse_vector are provided.

  • ApiError – If the API returns an error response.

  • PineconeConnectionError – If a network-level connection fails (DNS, refused, transport error).

  • PineconeTimeoutError – If the request exceeds the configured timeout.

Return type:

QueryResponse

Examples

response = await idx.query(
    top_k=10,
    vector=[0.012, -0.087, 0.153],  # truncated; use your actual dimension
)
for match in response.matches:
    print(match.id, match.score)

Query with a metadata filter:

response = await idx.query(
    top_k=10,
    vector=[0.012, -0.087, 0.153],
    filter={"genre": "comedy", "year": {"$gte": 2020}},
    namespace="movies-en",
)

async query_namespaces(*, vector=None, namespaces, metric, top_k=None, filter=None, include_values=False, include_metadata=False, sparse_vector=None, scan_factor=None, max_candidates=None, timeout=None)[source]

Query multiple namespaces concurrently and return merged top results.

Fans out individual query() calls across all given namespaces using asyncio.gather, then merges results via a heap-based aggregator that returns the overall top-k matches ranked by the specified metric.

Parameters:
  • vector (list[float] | None) – Dense query vector values. Required for dense and hybrid indexes; omit for sparse-only indexes (use sparse_vector instead).

  • namespaces (list[str]) – Namespaces to query (must be non-empty). Duplicates are removed while preserving order.

  • metric (str) – Distance metric — "cosine", "euclidean", or "dotproduct".

  • top_k (int | None) – Maximum number of results to return. Defaults to 10.

  • filter (dict[str, Any] | None) – Metadata filter expression applied to every namespace.

  • include_values (bool) – Whether to include vector values in results.

  • include_metadata (bool) – Whether to include metadata in results.

  • sparse_vector (SparseValues | dict[str, Any] | None) – Sparse query vector with indices and values. Required for sparse-only indexes when vector is omitted.

  • scan_factor (float | None) – DRN performance tuning — controls how much of the index is scanned during a query. Higher values scan more data and may improve recall at the cost of latency.

  • max_candidates (int | None) – DRN performance tuning — maximum number of candidate vectors to consider during the search phase.

  • timeout (float | None)

Returns:

QueryNamespacesResults with the merged top-k matches, total usage, and per-namespace usage.

Return type:

QueryNamespacesResults

Examples

# Dense query
results = await idx.query_namespaces(
    vector=[0.012, -0.087, 0.153],  # truncated; use your actual dimension
    namespaces=["articles-en", "articles-fr", "articles-de"],
    metric="cosine",
    top_k=10,
)

# Sparse-only query (sparse index)
results = await idx.query_namespaces(
    sparse_vector={"indices": [0, 1, 2], "values": [0.1, 0.2, 0.3]},
    namespaces=["docs-en", "docs-fr"],
    metric="dotproduct",
    top_k=10,
)

for match in results.matches:
    print(match.id, match.score)

async fetch(*, ids, namespace='', timeout=None)[source]

Fetch vectors by their IDs from a namespace.

Parameters:
  • ids (list[str]) – List of vector IDs to fetch (must be non-empty).

  • namespace (str) – Namespace to fetch from. Defaults to the default namespace.

  • timeout (float | None)

Returns:

FetchResponse with a map of vector IDs to Vector objects, namespace, and usage info. IDs that do not exist are omitted from the map rather than raising an error.

Return type:

FetchResponse

Examples

response = await idx.fetch(ids=["article-101", "article-102"])
for vid, vec in response.vectors.items():
    print(vid, vec.values)
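
Because missing IDs are omitted rather than raising, callers that need strictness must check for them explicitly; a small sketch (the helper name is illustrative):

```python
# Sketch: fetch a set of IDs and fail loudly if any are missing, since
# fetch() omits unknown IDs instead of raising. The helper is illustrative.
async def fetch_strict(idx, ids, namespace=""):
    response = await idx.fetch(ids=ids, namespace=namespace)
    missing = [i for i in ids if i not in response.vectors]
    if missing:
        raise KeyError(f"vectors not found: {missing}")
    return response.vectors
```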

async fetch_by_metadata(*, filter, namespace='', limit=None, pagination_token=None, timeout=None)[source]

Fetch vectors matching a metadata filter expression.

Returns vectors whose metadata satisfies the given filter, with pagination support. The server returns up to 100 vectors per page when no limit is specified.

Parameters:
  • filter (dict[str, Any]) – Metadata filter expression (required).

  • namespace (str) – Namespace to fetch from. Defaults to the default namespace.

  • limit (int | None) – Maximum number of vectors to return per page. When None, the server default (100) is used.

  • pagination_token (str | None) – Token from a previous response to fetch the next page. When None, fetches the first page.

  • timeout (float | None)

Returns:

FetchByMetadataResponse with matched vectors, namespace, usage, and pagination token for the next page (if any).

Raises:

ApiError – If the API returns an error response (e.g. authentication failure or server error).

Return type:

FetchByMetadataResponse

Examples

response = await idx.fetch_by_metadata(
    filter={"genre": {"$eq": "comedy"}},
    namespace="movies",
)

# Paginate through all results, processing every page
# (the first page is handled on the first pass through the loop)
while True:
    for vid, vec in response.vectors.items():
        print(vid, vec.values)
    token = response.pagination.next if response.pagination else None
    if not token:
        break
    response = await idx.fetch_by_metadata(
        filter={"genre": {"$eq": "comedy"}},
        namespace="movies",
        pagination_token=token,
    )

async delete(*, ids=None, delete_all=False, filter=None, namespace='', timeout=None)[source]

Delete vectors from a namespace by ID, filter, or delete-all flag.

Exactly one of ids, delete_all, or filter must be specified. Deleting IDs that do not exist does not raise an error.

Parameters:
  • ids (list[str] | None) – List of vector IDs to delete.

  • delete_all (bool) – If True, delete all vectors in the namespace.

  • filter (dict[str, Any] | None) – Metadata filter expression selecting vectors to delete.

  • namespace (str) – Namespace to delete from. Defaults to the default namespace.

  • timeout (float | None)

Returns:

None — a successful delete returns no payload.

Return type:

None

Examples

# Delete by IDs
await idx.delete(ids=["article-101", "article-102"])

# Delete all vectors in a namespace
await idx.delete(delete_all=True, namespace="articles-deprecated")

# Delete by metadata filter
await idx.delete(filter={"category": {"$eq": "obsolete"}})

async update(*, id=None, values=None, sparse_values=None, set_metadata=None, namespace='', filter=None, dry_run=False, timeout=None)[source]

Update vectors by ID or metadata filter.

Exactly one of id or filter must be specified.

Parameters:
  • id (str | None) – ID of the vector to update.

  • values (list[float] | None) – New dense vector values.

  • sparse_values (SparseValues | dict[str, Any] | None) – New sparse vector.

  • set_metadata (dict[str, Any] | None) – Metadata fields to set or overwrite.

  • namespace (str) – Namespace to target. Defaults to the default namespace.

  • filter (dict[str, Any] | None) – Metadata filter expression selecting vectors to update.

  • dry_run (bool) – If True, return the count of records that would be affected without applying changes.

  • timeout (float | None)

Returns:

UpdateResponse with matched_records count (when available).

Return type:

UpdateResponse

Examples

# Update by ID
# truncated values; use your actual dimension
await idx.update(id="article-101", values=[0.012, -0.087, 0.153])

# Bulk-update metadata by filter
await idx.update(
    filter={"genre": {"$eq": "drama"}},
    set_metadata={"year": 2020},
)
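
The dry_run flag makes it possible to preview a filter-based update before committing it; a guarded sketch (the helper name and threshold are illustrative, not part of the SDK):

```python
# Sketch: count the records a filter-based update would touch, and only
# apply it when the count is below a threshold.
async def guarded_update(idx, filter, set_metadata, max_affected=1000):
    preview = await idx.update(
        filter=filter, set_metadata=set_metadata, dry_run=True
    )
    if preview.matched_records and preview.matched_records > max_affected:
        raise RuntimeError(
            f"refusing to update {preview.matched_records} records"
        )
    return await idx.update(filter=filter, set_metadata=set_metadata)
```
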

async search(*, namespace, top_k, inputs=None, vector=None, id=None, filter=None, fields=None, rerank=None, match_terms=None, timeout=None)[source]

Search records by text, vector, or ID with optional reranking.

Searches a namespace using integrated inference (text inputs embedded server-side), a raw vector, or an existing record ID as the query.

Parameters:
  • namespace (str) – Namespace to search in (required).

  • top_k (int) – Number of results to return (must be >= 1).

  • inputs (SearchInputs | dict[str, Any] | None) – Inputs for server-side embedding (e.g. {"text": "query text"}). Use SearchInputs for typed key validation and IDE autocompletion (e.g. SearchInputs(text="query text")).

  • vector (list[float] | None) – Dense query vector values.

  • id (str | None) – ID of an existing record to use as the query.

  • filter (dict[str, Any] | None) – Metadata filter expression.

  • fields (list[str] | None) – Field names to include in results. When None, the server returns all available fields.

  • rerank (RerankConfig | dict[str, Any] | None) – Reranking configuration with model (required), rank_fields (required), and optional top_n, parameters, query keys. Use RerankConfig for IDE autocompletion.

  • match_terms (dict[str, Any] | None) – Term-matching constraint for sparse search. Requires keys "strategy" (currently only "all") and "terms" (list of strings). Only supported for sparse indexes using pinecone-sparse-english-v0. None disables term matching.

  • timeout (float | None)

Returns:

SearchRecordsResponse with hits and usage statistics.

Return type:

SearchRecordsResponse

Examples

response = await idx.search(
    namespace="articles-en",
    top_k=10,
    inputs={"text": "benefits of vector databases for search"},
)
for hit in response.result.hits:
    print(hit.id, hit.score)
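
A search with server-side reranking can over-fetch candidates and let the reranker pick the best few. The model name below is an example only (check Pinecone's docs for currently supported rerank models), and the helper itself is illustrative:

```python
# Sketch: over-fetch top_k candidates, then rerank down to top_n.
# Model name and helper are illustrative, not guaranteed by this client.
async def search_reranked(idx, query_text, namespace, top_n=5):
    return await idx.search(
        namespace=namespace,
        top_k=20,  # over-fetch, then let the reranker pick the top_n
        inputs={"text": query_text},
        rerank={
            "model": "bge-reranker-v2-m3",
            "rank_fields": ["text"],
            "top_n": top_n,
        },
    )
```
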

async search_records(*, namespace, top_k, inputs=None, vector=None, id=None, filter=None, fields=None, rerank=None, match_terms=None, timeout=None)[source]

Alias for search().

Prefer calling search() directly — this alias exists for backwards compatibility.

Return type:

SearchRecordsResponse

async list_paginated(*, prefix=None, limit=None, pagination_token=None, namespace='', timeout=None)[source]

Fetch a single page of vector IDs from a namespace.

Parameters:
  • prefix (str | None) – Return only IDs starting with this prefix.

  • limit (int | None) – Maximum number of IDs to return in this page.

  • pagination_token (str | None) – Token from a previous response to fetch the next page.

  • namespace (str) – Namespace to list from. Defaults to the default namespace.

  • timeout (float | None)

Returns:

ListResponse with vector IDs, pagination info, namespace, and usage.

Return type:

ListResponse

Examples

response = await idx.list_paginated(prefix="doc1#", limit=50)
for item in response.vectors:
    print(item.id)

async list(*, prefix=None, limit=None, namespace='', timeout=None)[source]

List vector IDs in a namespace, automatically following pagination.

Yields one ListResponse per page. The generator automatically follows pagination tokens until all pages have been retrieved.

Parameters:
  • prefix (str | None) – Return only IDs starting with this prefix.

  • limit (int | None) – Maximum number of IDs to return per page.

  • namespace (str) – Namespace to list from. Defaults to the default namespace.

  • timeout (float | None)

Yields:

ListResponse for each page of results.

Return type:

AsyncIterator[ListResponse]

Examples

async for page in idx.list(prefix="doc1#"):
    for item in page.vectors:
        print(item.id)
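
To collect every ID into one flat list, the page generator can be drained with a small helper (illustrative, not part of the client):

```python
async def all_vector_ids(idx, prefix=None, namespace=""):
    """Drain idx.list() and return every vector ID as one flat list."""
    ids = []
    async for page in idx.list(prefix=prefix, namespace=namespace):
        ids.extend(item.id for item in page.vectors)
    return ids
```
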

async describe_index_stats(*, filter=None, timeout=None)[source]

Return statistics for this index.

Returns aggregate statistics including total vector count, per-namespace vector counts, dimension, and index fullness.

Parameters:
  • filter (dict[str, Any] | None) – Metadata filter expression. When provided, only vectors matching the filter are counted.

  • timeout (float | None)

Returns:

DescribeIndexStatsResponse with namespace summaries, dimension, total vector count, and fullness metrics.

Return type:

DescribeIndexStatsResponse

Examples

stats = await idx.describe_index_stats()
print(stats.total_vector_count, stats.dimension)

# With filter — only count vectors matching the expression
stats = await idx.describe_index_stats(
    filter={"genre": {"$eq": "drama"}}
)
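
Per-namespace counts can be pulled out of the stats response; a sketch assuming each namespace summary exposes a vector_count attribute (an assumption based on the description above):

```python
async def namespace_counts(idx):
    """Map each namespace name to its vector count (attribute layout assumed)."""
    stats = await idx.describe_index_stats()
    return {name: ns.vector_count for name, ns in stats.namespaces.items()}
```
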

async create_namespace(*, name, schema=None)[source]

Create a named namespace in the index.

Parameters:
  • name (str) – Name for the new namespace (must be non-empty).

  • schema (dict[str, Any] | None) – Optional schema configuration with metadata field indexing settings.

Returns:

NamespaceDescription with the namespace name and record count.

Raises:
  • PineconeValueError – If the name is not a string or is empty/whitespace.

  • ApiError – If the API returns an error response (e.g. 409 conflict when namespace already exists).

  • PineconeConnectionError – If a network-level connection fails (DNS, refused, transport error).

  • PineconeTimeoutError – If the request exceeds the configured timeout.

Return type:

NamespaceDescription

Examples

ns = await idx.create_namespace(name="my-ns")
print(ns.name, ns.record_count)

async describe_namespace(*, name=None, **kwargs)[source]

Describe a namespace by name.

Parameters:
  • name (str) – Name of the namespace to describe.

  • kwargs (str)

Returns:

NamespaceDescription with the namespace name, record count, and schema information.

Return type:

NamespaceDescription

Examples

ns = await idx.describe_namespace(name="my-ns")
print(ns.name, ns.record_count)

async delete_namespace(*, name=None, timeout=None, **kwargs)[source]

Delete a namespace by name, removing all its vectors.

Parameters:
  • name (str) – Name of the namespace to delete.

  • timeout (float | None)

  • kwargs (str)

Returns:

None — a successful delete returns no payload.

Return type:

None

Examples

await idx.delete_namespace(name="old-data")

async list_namespaces_paginated(*, prefix=None, limit=None, pagination_token=None)[source]

Fetch a single page of namespace descriptions.

Parameters:
  • prefix (str | None) – Return only namespaces whose names start with this prefix.

  • limit (int | None) – Maximum number of namespaces to return in this page.

  • pagination_token (str | None) – Token from a previous response to fetch the next page.

Returns:

ListNamespacesResponse with namespace descriptions, pagination info, and total count.

Raises:

ApiError – If the API returns an error response.

Return type:

ListNamespacesResponse

Examples

response = await idx.list_namespaces_paginated(prefix="prod-", limit=10)
for ns in response.namespaces:
    print(ns.name, ns.record_count)

async list_namespaces(*, prefix=None, limit=None)[source]

List namespaces, automatically following pagination.

Yields one ListNamespacesResponse per page. The generator automatically follows pagination tokens until all pages have been retrieved.

Parameters:
  • prefix (str | None) – Return only namespaces whose names start with this prefix.

  • limit (int | None) – Maximum number of namespaces to return per page.

Yields:

ListNamespacesResponse for each page of results.

Return type:

AsyncIterator[ListNamespacesResponse]

Examples

async for page in idx.list_namespaces(prefix="prod-"):
    for ns in page.namespaces:
        print(ns.name, ns.record_count)

async start_import(uri, *, error_mode='continue', integration_id=None)[source]

Start a bulk import operation from an external data source.

Initiates an asynchronous bulk import of vectors from cloud storage into the index. The import runs server-side; use describe_import() to poll for progress and completion.

Note

The import URI must point to a directory of Parquet files in cloud storage (s3:// or gs://). Each Parquet file must follow the Pinecone-required schema. See Pinecone import docs for the required Parquet schema and supported storage formats.

Parameters:
  • uri (str) – Source URI for the import data (e.g. "s3://my-bucket/vectors/" or "gs://my-bucket/vectors/").

  • error_mode (str) – How to handle errors during import. Must be "continue" (default) or "abort". Case-insensitive.

  • integration_id (str | None) – Optional integration ID for the import.

Returns:

StartImportResponse with the ID of the created import operation.

Return type:

StartImportResponse

Examples

import asyncio

# Start an import and poll until complete
response = await idx.start_import(uri="s3://my-bucket/vectors/")
import_id = response.id

import_op = await idx.describe_import(import_id)
while import_op.status not in ("Completed", "Failed", "Cancelled"):
    await asyncio.sleep(10)
    import_op = await idx.describe_import(import_id)
print(f"Status: {import_op.status}, records imported: {import_op.records_imported}")

# Abort on first error instead of continuing
response = await idx.start_import(
    uri="s3://my-bucket/vectors/",
    error_mode="abort",
)

See also

  • upsert() — for upserting vectors directly in small batches (single request per call).

  • upsert_records() — for indexes with integrated inference (text in, server-side embedding).

async describe_import(id)[source]

Describe a bulk import operation by ID.

Parameters:

id (str | int) – Import operation ID. Integers are converted to strings silently.

Returns:

ImportModel with the import operation details.

Return type:

ImportModel

Examples

import_op = await idx.describe_import("import-123")
print(import_op.status, import_op.percent_complete)

async cancel_import(id)[source]

Cancel a bulk import operation by ID.

Parameters:

id (str | int) – Import operation ID. Integers are converted to strings silently.

Returns:

None — a successful cancellation returns no payload.

Return type:

None

Examples

await idx.cancel_import("import-123")

async list_imports(*, limit=None, pagination_token=None)[source]

List bulk import operations, automatically following pagination.

Yields individual ImportModel objects, fetching additional pages transparently until all results have been returned.

Parameters:
  • limit (int | None) – Maximum number of imports per page (max 100, server default 100).

  • pagination_token (str | None) – Token to resume pagination from a previous call.

Yields:

ImportModel for each import operation.

Raises:

ApiError – If the API returns an error response.

Return type:

AsyncIterator[ImportModel]

Examples

async for imp in idx.list_imports():
    print(imp.id, imp.status)

async list_imports_paginated(*, limit=None, pagination_token=None)[source]

Fetch a single page of bulk import operations.

Returns an ImportList for one page. The caller is responsible for managing the pagination token.

Parameters:
  • limit (int | None) – Maximum number of imports to return in this page.

  • pagination_token (str | None) – Token from a previous response to fetch the next page.

Returns:

ImportList with the import operations for the requested page.

Raises:

ApiError – If the API returns an error response.

Return type:

ImportList

Examples

page = await idx.list_imports_paginated(limit=10)
for imp in page:
    print(imp.id, imp.status)
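
Since the caller manages the token, a full manual walk looks like this. The page.pagination.next attribute path mirrors the other paginated responses in this module and is an assumption here:

```python
# Sketch: walk every page yourself, carrying the pagination token between
# calls. Helper name and attribute path are illustrative.
async def all_imports(idx, page_size=100):
    imports, token = [], None
    while True:
        page = await idx.list_imports_paginated(
            limit=page_size, pagination_token=token
        )
        imports.extend(page)
        token = page.pagination.next if page.pagination else None
        if not token:
            return imports
```
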

async close()[source]

Close the underlying HTTP client and release resources.

Return type:

None

async __aenter__()[source]
Return type:

AsyncIndex

async __aexit__(*args)[source]
Parameters:

args (Any)

Return type:

None