turbopuffer

The official Python library for the turbopuffer API


turbopuffer Python API library <a href="https://turbopuffer.com"><img src="https://github.com/user-attachments/assets/8d6cca4c-10b7-4d3a-9782-696053baf44e" align="right"></a>

<!-- prettier-ignore -->

<a href="https://pypi.org/project/turbopuffer/"><img src="https://img.shields.io/pypi/v/turbopuffer.svg?label=pypi%20(stable)" alt="PyPI version" align="right"></a>

The turbopuffer Python library provides convenient access to the Turbopuffer HTTP API from any Python 3.9+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.

It is generated with Stainless.

MCP Server

Use the Turbopuffer MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.

Add to Cursor · Install in VS Code

Note: You may need to set environment variables in your MCP client.
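For example, you can export the API key in your shell so the MCP client can pass it through to the server (the key value below is illustrative; the exact configuration mechanism depends on your MCP client):

```shell
# Illustrative only: make TURBOPUFFER_API_KEY visible to the MCP client.
export TURBOPUFFER_API_KEY="tpuf_A1..."
```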

Documentation

The HTTP API documentation can be found at turbopuffer.com/docs.

Installation

# install from PyPI
pip install turbopuffer

Usage

import os
from turbopuffer import Turbopuffer

tpuf = Turbopuffer(
    # Pick the right region https://turbopuffer.com/docs/regions
    region="gcp-us-central1",
    # This is the default and can be omitted
    api_key=os.environ.get("TURBOPUFFER_API_KEY"),
)

ns = tpuf.namespace("example")

# Query nearest neighbors with a vector.
vector_result = ns.query(
    rank_by=("vector", "ANN", [0.1, 0.2]),
    top_k=10,
    filters=("And", (("name", "Eq", "foo"), ("public", "Eq", 1))),
    include_attributes=["name"],
)
print(vector_result.rows)
# [Row(id=1, vector=None, $dist=0.009067952632904053, name='foo')]

# Full-text search on an attribute.
fts_result = ns.query(
    top_k=10,
    filters=("name", "Eq", "foo"),
    rank_by=("text", "BM25", "quick walrus"),
)
print(fts_result.rows)
# [Row(id=1, vector=None, $dist=0.19, name='foo')]
# [Row(id=2, vector=None, $dist=0.168, name='foo')]

# See https://turbopuffer.com/docs/quickstart for more.

While you can provide an api_key keyword argument, we recommend using python-dotenv to add TURBOPUFFER_API_KEY="tpuf_A1..." to your .env file so that your API key is not stored in source control.
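A lightweight companion pattern is to fail fast with a clear message when the variable is missing, rather than letting a None api_key surface as a confusing authentication error later. This is a sketch; require_api_key is a hypothetical helper, not part of the SDK:

```python
import os


def require_api_key() -> str:
    """Read TURBOPUFFER_API_KEY, failing loudly if it is unset or empty."""
    key = os.environ.get("TURBOPUFFER_API_KEY")
    if not key:
        raise RuntimeError(
            "TURBOPUFFER_API_KEY is not set; add it to your environment "
            "or to a .env file loaded with python-dotenv"
        )
    return key
```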

Async usage

Simply import AsyncTurbopuffer instead of Turbopuffer and use await with each API call:

import os
import asyncio
from turbopuffer import AsyncTurbopuffer

tpuf = AsyncTurbopuffer(
    # Pick the right region https://turbopuffer.com/docs/regions
    region="gcp-us-central1",
    # This is the default and can be omitted
    api_key=os.environ.get("TURBOPUFFER_API_KEY"),
)

ns = tpuf.namespace("example")


async def main() -> None:
    # Query nearest neighbors with a vector.
    vector_result = await ns.query(
        rank_by=("vector", "ANN", [0.1, 0.2]),
        top_k=10,
        filters=("And", (("name", "Eq", "foo"), ("public", "Eq", 1))),
        include_attributes=["name"],
    )
    print(vector_result.rows)
    # [Row(id=1, vector=None, $dist=0.009067952632904053, name='foo')]

    # Full-text search on an attribute.
    fts_result = await ns.query(
        top_k=10,
        filters=("name", "Eq", "foo"),
        rank_by=("text", "BM25", "quick walrus"),
    )
    print(fts_result.rows)
    # [Row(id=1, vector=None, $dist=0.19, name='foo')]
    # [Row(id=2, vector=None, $dist=0.168, name='foo')]

    # See https://turbopuffer.com/docs/quickstart for more.


asyncio.run(main())

Functionality between the synchronous and asynchronous clients is otherwise identical.

With aiohttp

By default, the async client uses httpx for HTTP requests. However, for improved concurrency performance you may also use aiohttp as the HTTP backend.

You can enable this by installing aiohttp:

# install from PyPI
pip install turbopuffer[aiohttp]

Then pass http_client=DefaultAioHttpClient() when instantiating the client:

import os
import asyncio
from turbopuffer import DefaultAioHttpClient
from turbopuffer import AsyncTurbopuffer


async def main() -> None:
    async with AsyncTurbopuffer(
        api_key=os.environ.get("TURBOPUFFER_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        response = await client.namespaces.write(
            namespace="products",
            distance_metric="cosine_distance",
            upsert_rows=[
                {
                    "id": "2108ed60-6851-49a0-9016-8325434f3845",
                    "vector": [0.1, 0.2],
                }
            ],
        )
        print(response.rows_affected)


asyncio.run(main())

Using types

Nested request parameters are TypedDicts. Responses are Pydantic models which also provide helper methods for things like:

  • Serializing back into JSON, model.to_json()
  • Converting to a dictionary, model.to_dict()

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.
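To illustrate why TypedDicts work well here (the QueryParams shape below is hypothetical, for illustration only; the SDK ships its own TypedDict definitions for each endpoint's request parameters):

```python
from typing import TypedDict


# Hypothetical request-param shape. total=False makes every key optional,
# which matches how optional API parameters are typically modeled.
class QueryParams(TypedDict, total=False):
    top_k: int
    include_attributes: list[str]


# TypedDicts are plain dicts at runtime -- the typing is purely static,
# so a type checker (e.g. Pyright in basic mode) flags wrong keys or
# value types without any runtime overhead.
params: QueryParams = {"top_k": 10, "include_attributes": ["name"]}
```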

Pagination

List methods in the Turbopuffer API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

from turbopuffer import Turbopuffer

client = Turbopuffer()

all_namespaces = []
# Automatically fetches more pages as needed.
for namespace in client.namespaces(
    prefix="products",
):
    # Do something with namespace here
    all_namespaces.append(namespace)
print(all_namespaces)

Or, asynchronously:

import asyncio
from turbopuffer import AsyncTurbopuffer

client = AsyncTurbopuffer()


async def main() -> None:
    all_namespaces = []
    # Iterate through items across all pages, issuing requests as needed.
    async for namespace in client.namespaces(
        prefix="products",
    ):
        all_namespaces.append(namespace)
    print(all_namespaces)


asyncio.run(main())

Alternatively, you can use the .has_next_page(), .next_page_info(), or .get_next_page() methods for more granular control when working with pages:

first_page = await client.namespaces(
    prefix="products",
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.namespaces)}")

# Remove `await` for non-async usage.

Or just work directly with the returned data:

first_page = await client.namespaces(
    prefix="products",
)

print(f"next page cursor: {first_page.next_cursor}")  # => "next page cursor: ..."
for namespace in first_page.namespaces:
    print(namespace.id)

# Remove `await` for non-async usage.

Nested params

Nested parameters are dictionaries, typed using TypedDict, for example:

from turbopuffer import Turbopuffer

client = Turbopuffer()

response = client.namespaces.write(
    namespace="namespace",
    encryption={},
)
print(response.encryption)

Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of turbopuffer.APIConnectionError is raised.

When the API returns a non-success status code (that is, 4xx or 5xx response), a subclass of turbopuffer.APIStatusError is raised, containing status_code and response properties.

All errors inherit from turbopuffer.APIError.

import turbopuffer
from turbopuffer import Turbopuffer

client = Turbopuffer()

try:
    client.namespaces(
        prefix="foo",
    )
except turbopuffer.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except turbopuffer.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")