Changelog

See our latest feature releases, product improvements, and bug fixes

Apr 3, 2024

Improved log filtering

You can now filter logs through the main text input. Just start typing the filter you’re looking for, like level, and autocomplete options will appear. Currently, logs filter by: Log level...

Mar 26, 2024

Permit inference on unhealthy models

A model enters an “unhealthy” state when the deployment is active but there are runtime errors such as downtime on an external dependency. We now permit inference requests to proceed even when a...

Mar 21, 2024

Improve performance and reduce cost with fractional H100 GPUs

Baseten now offers model inference on NVIDIA H100mig GPUs, available for all customers starting at $0.08250/minute. The H100mig family of instances runs on a fractional share of an H100 GPU using...
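At the quoted per-minute rate, the cost of keeping a fractional H100 instance up works out as follows (a quick illustrative calculation, not an official quote):

```python
# Illustrative arithmetic on the quoted $0.08250/minute rate for
# fractional H100 instances. Assumes continuous uptime at the base rate.
PER_MINUTE = 0.08250

per_hour = PER_MINUTE * 60   # 60 minutes in an hour
per_day = per_hour * 24      # 24 hours of continuous uptime

print(f"${per_hour:.2f}/hour")  # $4.95/hour
print(f"${per_day:.2f}/day")    # $118.80/day
```

Actual costs depend on autoscaling behavior, since replicas that scale to zero stop accruing per-minute charges.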

Mar 20, 2024

Manage models with the Baseten REST API

We’re excited to share that we’ve created a REST API for managing Baseten models! Unlock powerful use cases outside of the (albeit amazing) Baseten UI: interact with your models programmatically,...
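As a minimal sketch of programmatic access, the snippet below lists models over HTTP. The base URL, `/models` path, `Api-Key` header format, and response shape are assumptions here; check the REST API reference for the exact endpoints available to your account.

```python
# Hedged sketch: list Baseten models via the management REST API.
# Endpoint path and auth header format are assumptions; consult the
# official API reference before relying on them.
import json
import urllib.request

API_BASE = "https://api.baseten.co/v1"  # assumed base URL


def build_request(api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request for listing models."""
    return urllib.request.Request(
        f"{API_BASE}/models",
        headers={"Authorization": f"Api-Key {api_key}"},
    )


def list_models(api_key: str) -> list:
    """Fetch and return the models for the authenticated workspace."""
    req = build_request(api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("models", [])
```

The same pattern extends to other management operations (deployments, autoscaling settings) by swapping the path and HTTP method.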

Mar 7, 2024

Configure model hardware with new resource selector

Every deployment of an ML model requires certain hardware resources — usually a GPU plus CPU cores and RAM — to run inference. We’ve made it easier to navigate the wide variety of hardware options...

Feb 23, 2024

View detailed billing and usage metrics

You can now view a daily breakdown of your model usage and billing information for more insight into usage and costs. Here are the key changes: A new graph displays daily costs, requests, and...

Feb 6, 2024

Double inference speed and throughput with NVIDIA H100 GPUs

Baseten is now offering model inference on H100 GPUs starting at $9.984/hour. Switching to H100s offers an 18 to 45 percent improvement in price-to-performance versus equivalent A100 workloads using...

Jan 19, 2024

Deploy state-of-the-art open source models instantly

We’ve totally refreshed our model library to make it easier for you to find, evaluate, deploy, and build on state-of-the-art open source ML models. You can try the new model library for yourself...

Jan 11, 2024

NVIDIA L4 GPUs now generally available on Baseten

You can now deploy models to instances powered by the L4 GPU on Baseten. NVIDIA’s L4 GPU is an Ada Lovelace series GPU with 121 teraFLOPS of float16 compute and 24 GB of VRAM at 300 GB/s memory...

Jan 8, 2024

Give names to model deployments

When deploying with Truss via truss push, you can now assign meaningful names to your deployments using the --deployment-name argument, making them easier to identify and manage. Here's an example:...
