SeaQuery just made writing raw SQL more enjoyable

· 8 min read
SeaQL Team
Chris Tsang
SeaQuery 1.0 Banner

If you enjoy writing raw SQL queries, SeaQuery just made it better!

We've created a new raw_query! macro with neat features to make writing raw SQL queries more ergonomic.

The biggest headache when crafting complex queries is parameter binding: whether you use ? or $N, assigning parameters manually is laborious and error-prone.

SeaQuery's new raw_query! macro is designed to solve this exact problem.

Gist

let a = 1;
struct B { b: i32 }
let b = B { b: 2 };
let c = "A";
let d = vec![3, 4, 5];

let query = sea_query::raw_query!(
PostgresQueryBuilder,
r#"SELECT ("size_w" + {a}) * {b.b} FROM "glyph"
WHERE "image" LIKE {c} AND "id" IN ({..d})"#
);

assert_eq!(
query.sql,
r#"SELECT ("size_w" + $1) * $2 FROM "glyph"
WHERE "image" LIKE $3 AND "id" IN ($4, $5, $6)"#
);
assert_eq!(
query.values,
Values(vec![1.into(), 2.into(), "A".into(), 3.into(), 4.into(), 5.into()])
);

There are several features packed into the snippet above!

Let's have a quick overview and we'll dive into the details:

  1. named parameter: {a} injected
  2. nested parameter access: {b.b} inner access
  3. array expansion: {..d} expanded into three parameters

There are two more features that will be showcased later:

  1. tuple expansion: {values.0:2}
  2. repeating group: {..(values.0),}

Motivation

While SeaQuery has long offered a way to build dynamic queries that solves basically the same problem, not everyone likes SeaQuery's code structure.

I recently came across this SQLx issue, and was enticed to solve it.

I was heavily inspired by Rust's format! macro and the template literals of other languages, and figured string interpolation could be achieved in Rust as well!

Challenges

First, the above cannot be implemented with standard macro_rules because of macro hygiene: the expanded code cannot access variables in the caller's scope. Proc macros do not have this limitation.

Second, if we want to expand an array, the number of elements cannot be known until runtime. So a compile-time approach couldn't work.

Third, we have to support all of Rust's primitive and container types as well as third party data types.

Dive in

Let's take a look at what the above code expands into!

let a = 1;
struct B {
b: i32,
}
let b = B { b: 2 };
let c = "A";
let d = <[_]>::into_vec(#[rustc_box] ::alloc::boxed::Box::new([3, 4, 5]));

let query = {
use sea_query::raw_sql::*;
let mut builder = RawSqlQueryBuilder::new(sea_query::PostgresQueryBuilder);
builder
.push_fragment("SELECT (\"size_w\" + ")
.push_parameters(1)
.push_fragment(") * ")
.push_parameters(1)
.push_fragment(" FROM \"glyph\"\n WHERE \"image\" LIKE ")
.push_parameters(1)
.push_fragment(" AND \"id\" IN (")
.push_parameters((&d).len())
.push_fragment(")");
let sql = builder.finish();
let mut query = seaql::query(&sql);
query = query.bind(&a);
query = query.bind(&b.b);
query = query.bind(&c);
for v in (&d).iter() {
query = query.bind(v);
}
query
};

I created RawSqlQueryBuilder, which is somewhat similar to SeaQuery's query-building backend. It serializes the query string in a single pass.

The proc macro first tokenizes and parses the raw SQL to identify the splice points, then calls a special method push_parameters to push a variable number of parameters. These can be numbered, e.g. for Postgres.

After finishing the SQL building part, it then proceeds to bind the parameters. This mechanism is designed around SQLx's query API.

If the variable has a spread .. operator, we'd loop over it and bind all items.
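To make the mechanism concrete, here is a minimal sketch (illustrative only, not SeaQuery's actual implementation) of a builder whose push_parameters emits numbered $N placeholders in a single pass, the way a Postgres backend would; PlaceholderBuilder is a hypothetical stand-in for RawSqlQueryBuilder:

```rust
// Illustrative sketch of numbered-placeholder emission.
struct PlaceholderBuilder {
    sql: String,
    counter: usize, // number of parameters emitted so far
}

impl PlaceholderBuilder {
    fn new() -> Self {
        Self { sql: String::new(), counter: 0 }
    }

    fn push_fragment(&mut self, fragment: &str) -> &mut Self {
        self.sql.push_str(fragment);
        self
    }

    // Push `n` comma-separated placeholders, continuing the numbering,
    // e.g. `$4, $5, $6` — this is what array expansion relies on.
    fn push_parameters(&mut self, n: usize) -> &mut Self {
        for i in 0..n {
            if i > 0 {
                self.sql.push_str(", ");
            }
            self.counter += 1;
            self.sql.push_str(&format!("${}", self.counter));
        }
        self
    }

    fn finish(self) -> String {
        self.sql
    }
}

fn main() {
    let d = vec![3, 4, 5];
    let mut builder = PlaceholderBuilder::new();
    builder
        .push_fragment("SELECT * FROM \"glyph\" WHERE \"id\" IN (")
        .push_parameters(d.len())
        .push_fragment(")");
    assert_eq!(
        builder.finish(),
        "SELECT * FROM \"glyph\" WHERE \"id\" IN ($1, $2, $3)"
    );
}
```

Because the count is passed at runtime, the same code path handles both a single parameter and a spread-expanded array.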

Other ideas

Originally I wanted to implement auto-expansion, meaning if the parameter is a container type, we'd expand it automagically. But there are quite a few special cases - most importantly Vec<u8> is used as bytes and should be bound as a single parameter. Moreover, in Postgres you can use arrays directly, and so we can't decide whether to expand.

More goodies

Tuple expansion

There are two more features that'll make your life much easier.

let var = (1, "2".to_owned(), 3);

let query = sea_query::raw_query!(
PostgresQueryBuilder,
"SELECT {var.0}, {var.1}, {var.2}"
);

let new_query = sea_query::raw_query!(
PostgresQueryBuilder,
r#"SELECT {var.0:2}"#
);

assert_eq!(query, new_query);

We already support accessing tuple members, so why not offer a range operator?

The : token was chosen because it somewhat resembles the Python slice operator. [0:2] would be unnatural because tuple members in Rust can only be accessed with .0. Feel free to offer your thoughts!

It's not possible to automatically expand a tuple like an array because its arity (the number of elements) is not known at the time the macro is expanded. If the tuple consists of elements with a uniform type, it can be made iterable like a vector by implementing the appropriate traits. However, that approach doesn't apply in the case above, where the tuple's structure is not uniform.

You can do inserts with this:

let values = (1, "2", 3);

let query = sea_query::raw_query!(
MysqlQueryBuilder,
"INSERT INTO `glyph` (`aspect`, `image`, `font_size`) VALUES ({values.0:2})"
);
assert_eq!(
query.sql,
"INSERT INTO `glyph` (`aspect`, `image`, `font_size`) VALUES (?, ?, ?)"
);
assert_eq!(query.values, Values(vec![1.into(), "2".into(), 3.into()]));

You may ask, then how do we insert multiple elements? Which brings us to the next feature:

Insert Many

When working with Rust and SQL, a common pain point is figuring out how to insert multiple rows efficiently. Writing raw SQL INSERT statements for each row is verbose, and while Postgres supports UNNEST for bulk inserts, its columnar API feels awkward and error-prone.

SeaQuery solves this by letting you build multi-row insert statements easily. With the raw_query! macro, you can pass in a vector of structs or tuples and generate a single INSERT INTO .. VALUES (..) , (..) , (..) query. This keeps your code clean, while still allowing you to do bulk insertions for efficiency. This solution is database agnostic so you can use it for MySQL and SQLite as well.

With Tuples

let values = vec![(1, "2", 3), (4, "5", 6)];

let query = sea_query::raw_query!(
PostgresQueryBuilder,
r#"INSERT INTO "glyph" ("aspect", "image", "font_size")
VALUES {..(values.0:2),}"#
);
assert_eq!(
query.sql,
r#"INSERT INTO "glyph" ("aspect", "image", "font_size")
VALUES ($1, $2, $3), ($4, $5, $6)"#
);

To achieve this, we designed the "repeating group" syntax, inspired by regex. Let me explain:

It builds upon the previous example, in which values.0:2 means tuple expansion. We want to repeat this tuple as a group surrounded by parentheses, so we wrap it with (). Then we apply the same spread operator .. to expand the vector of tuples. Finally, the trailing , means the groups should be joined with ,.
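The placeholder string a repeating group produces can be sketched at runtime as follows (a hypothetical helper for illustration, not SeaQuery's code): one parenthesized group per row, each consuming `arity` consecutive placeholder numbers, joined by the separator:

```rust
// Hypothetical helper showing what `{..(values.0:2),}` emits for Postgres:
// one `(...)` group per row, numbering continuing across groups.
fn repeating_group(rows: usize, arity: usize) -> String {
    let mut counter = 0;
    let mut groups = Vec::new();
    for _ in 0..rows {
        let mut params = Vec::new();
        for _ in 0..arity {
            counter += 1;
            params.push(format!("${counter}"));
        }
        groups.push(format!("({})", params.join(", ")));
    }
    groups.join(", ")
}

fn main() {
    // Two rows of arity 3, as in the insert example above.
    assert_eq!(repeating_group(2, 3), "($1, $2, $3), ($4, $5, $6)");
}
```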

With structs

struct Item {
a: i32,
b: String,
c: u16,
}

let values = vec![
Item { a: 1, b: "2".to_owned(), c: 3 },
Item { a: 4, b: "5".to_owned(), c: 6 },
];

let query = sea_query::raw_query!(
PostgresQueryBuilder,
r#"INSERT INTO "glyph" ("aspect", "image", "font_size")
VALUES {..(values.a, values.b, values.c),}"#
);

This is equivalent to the previous example, but uses named parameters.

SQLx Integration

SeaQuery offers tight SQLx integration. So in practice you can do (requires the sqlx-utils feature flag):

let mut sql;
let res = sea_query::sqlx::sqlite::query!(
sql = r#"INSERT INTO "character"
("uuid", "font_size", "character")
VALUES {..(values.0:2),}"#
).execute(pool).await?;

Note the standalone sql variable: SQLx's Query object can only borrow the SQL as &str, so someone has to own the String. I couldn't think of a better API; suggestions welcome.
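The ownership constraint can be modeled in a few lines without sqlx (a sketch of the borrow relationship, with a hypothetical Query type standing in for sqlx's):

```rust
// Hypothetical stand-in for sqlx's Query, which borrows its SQL as &str.
struct Query<'a> {
    sql: &'a str,
}

fn query(sql: &str) -> Query<'_> {
    Query { sql }
}

fn main() {
    let sql; // declared first, so it outlives the Query that borrows from it
    sql = format!("SELECT {}", 1); // the macro assigns the built String here
    let q = query(&sql);
    assert_eq!(q.sql, "SELECT 1");
}
```

Declaring `sql` before the macro call gives the runtime-built String a binding that outlives the borrowing Query.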

It calls the underlying Query's bind method directly, so no extra copy is involved. This is the lowest possible overhead!

One final example:

let mut character = Character { id: 1, font_size: 0 };
character.font_size = 18;

let res = sea_query::sqlx::sqlite::query!(
sql = r#"UPDATE "character"
SET "font_size" = {character.font_size}
WHERE "id" = {character.id}"#
).execute(pool).await?;

Full example can be found here.

It almost feels like a mini ORM ... although SeaORM 🐚 is still highly recommended!

Lightweightness

SeaQuery without default features only has a handful of dependencies. But if you want to keep dependencies to an absolute minimum, you can depend on sea-query-derive directly.

$ cargo tree --no-default-features -e normal,build
sea-query-derive v1.0.0-rc.11 (proc-macro)
├── proc-macro2 v1.0.94
│   └── unicode-ident v1.0.12
├── quote v1.0.40
│   └── proc-macro2 v1.0.94 (*)
└── syn v2.0.100
    ├── proc-macro2 v1.0.94 (*)
    ├── quote v1.0.40 (*)
    └── unicode-ident v1.0.12

Then you can use it like:

let sql;
let res: Vec<_> = sea_query_derive::raw_sql!(
sqlx::sqlite::query_as,
sql = "SELECT {var.0:2}"
).fetch_all(pool).await?;

SeaQuery 1.0

This is just one of many new features we've added while preparing SeaQuery 1.0. This is currently an rc release, but we plan to finalize it soon. If you have ideas please join the discussion.

🦀 Rustacean Sticker Pack

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography
  • Mascots: Ferris the Crab x 3, Terres the Hermit Crab
  • The Rustacean wordmark

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

What's new in SeaORM 1.1.12

· 12 min read
SeaQL Team
Chris Tsang
SeaORM 1.0 Banner

This blog post summarizes the new features and enhancements introduced in SeaORM 1.1:

New Features

Implement DeriveValueType for enum strings

DeriveValueType now supports enum types. It offers a simpler alternative to DeriveActiveEnum for client-side enums backed by string database types.

#[derive(DeriveValueType)]
#[sea_orm(value_type = "String")]
pub enum Tag {
Hard,
Soft,
}

// `from_str` defaults to `std::str::FromStr::from_str`
impl std::str::FromStr for Tag {
type Err = sea_orm::sea_query::ValueTypeErr;
fn from_str(s: &str) -> Result<Self, Self::Err> { .. }
}

// `to_str` defaults to `std::string::ToString::to_string`.
impl std::fmt::Display for Tag {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { .. }
}

The following trait impls are generated, removing the boilerplate previously needed:

DeriveValueType expansion
#[automatically_derived]
impl std::convert::From<Tag> for sea_orm::Value {
fn from(source: Tag) -> Self {
std::string::ToString::to_string(&source).into()
}
}

#[automatically_derived]
impl sea_orm::TryGetable for Tag {
fn try_get_by<I: sea_orm::ColIdx>(res: &sea_orm::QueryResult, idx: I)
-> std::result::Result<Self, sea_orm::TryGetError> {
let string = String::try_get_by(res, idx)?;
std::str::FromStr::from_str(&string).map_err(|err| sea_orm::TryGetError::DbErr(sea_orm::DbErr::Type(format!("{err:?}"))))
}
}

#[automatically_derived]
impl sea_orm::sea_query::ValueType for Tag {
fn try_from(v: sea_orm::Value) -> std::result::Result<Self, sea_orm::sea_query::ValueTypeErr> {
let string = <String as sea_orm::sea_query::ValueType>::try_from(v)?;
std::str::FromStr::from_str(&string).map_err(|_| sea_orm::sea_query::ValueTypeErr)
}

fn type_name() -> std::string::String {
stringify!(Tag).to_owned()
}

fn array_type() -> sea_orm::sea_query::ArrayType {
sea_orm::sea_query::ArrayType::String
}

fn column_type() -> sea_orm::sea_query::ColumnType {
sea_orm::sea_query::ColumnType::String(sea_orm::sea_query::StringLen::None)
}
}

#[automatically_derived]
impl sea_orm::sea_query::Nullable for Tag {
fn null() -> sea_orm::Value {
sea_orm::Value::String(None)
}
}

You can override from_str and to_str with custom functions, which is especially useful if you're using strum::Display and strum::EnumString, or manually implemented methods:

#[derive(DeriveValueType)]
#[sea_orm(
value_type = "String",
from_str = "Tag::from_str",
to_str = "Tag::to_str"
)]
pub enum Tag {
Color,
Grey,
}

impl Tag {
fn from_str(s: &str) -> Result<Self, ValueTypeErr> { .. }

fn to_str(&self) -> &'static str { .. }
}

Support Postgres IpNetwork

#2395 (under feature flag with-ipnetwork)

// Model
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "host_network")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub ipaddress: IpNetwork,
#[sea_orm(column_type = "Cidr")]
pub network: IpNetwork,
}

// Schema
sea_query::Table::create()
.table(host_network::Entity)
.col(ColumnDef::new(host_network::Column::Id).integer().not_null().auto_increment().primary_key())
.col(ColumnDef::new(host_network::Column::Ipaddress).inet().not_null())
.col(ColumnDef::new(host_network::Column::Network).cidr().not_null())
.to_owned();

// CRUD
host_network::ActiveModel {
ipaddress: Set(IpNetwork::new(Ipv6Addr::new(..))),
network: Set(IpNetwork::new(Ipv4Addr::new(..))),
..Default::default()
}

Added default_values to ActiveModelTrait

ActiveModel::default() returns ActiveModel { .. NotSet } by default (it can also be overridden).

We've added a new method default_values() which sets all fields to their actual Default::default() values.

This fills in a gap in the type system to help with serde. A real-world use case is to improve ActiveModel::from_json, an upcoming new feature (which is a breaking change, sadly).

#[derive(DeriveEntityModel)]
#[sea_orm(table_name = "fruit")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub name: String,
pub cake_id: Option<i32>,
pub type_without_default: active_enums::Tea,
}

assert_eq!(
fruit::ActiveModel::default_values(),
fruit::ActiveModel {
id: Set(0), // integer
name: Set("".into()), // string
cake_id: Set(None), // option
type_without_default: NotSet, // not available
},
);

If you are interested in how this works under the hood, a new method Value::dummy_value is added in SeaQuery:

use sea_orm::sea_query::Value;
let v = Value::Int(None);
let n = v.dummy_value();
assert_eq!(n, Value::Int(Some(0)));

The real magic happens in a set of new traits, DefaultActiveValue, DefaultActiveValueNone and DefaultActiveValueNotSet, which take advantage of Rust's autoref specialization mechanism popularized by anyhow:

use sea_orm::value::{DefaultActiveValue, DefaultActiveValueNone, DefaultActiveValueNotSet};

let v = (&ActiveValue::<i32>::NotSet).default_value();
assert_eq!(v, ActiveValue::Set(0));

let v = (&ActiveValue::<Option<i32>>::NotSet).default_value();
assert_eq!(v, ActiveValue::Set(None));

let v = (&ActiveValue::<String>::NotSet).default_value();
assert_eq!(v, ActiveValue::Set("".to_owned()));

let v = (&ActiveValue::<Option<String>>::NotSet).default_value();
assert_eq!(v, ActiveValue::Set(None));

let v = (&ActiveValue::<TimeDateTime>::NotSet).default_value();
assert!(matches!(v, ActiveValue::Set(_)));

This enables progressive enhancements based on the traits of the individual ActiveValue type.
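For the curious, here is a minimal, self-contained sketch of the autoref specialization trick itself (illustrative types, not SeaORM's actual traits). The preferred impl, gated by a trait bound, is found one autoref step earlier in method resolution than the blanket fallback on the reference type:

```rust
use std::fmt::Display;

struct Wrap<T>(T);

trait SpecialKind {
    fn kind(&self) -> &'static str;
}
// Preferred: applies only when T: Display; found at the first
// candidate step (`&Wrap<T>` matches `&self` on `Wrap<T>`).
impl<T: Display> SpecialKind for Wrap<T> {
    fn kind(&self) -> &'static str {
        "display"
    }
}

trait FallbackKind {
    fn kind(&self) -> &'static str;
}
// Fallback: on the reference type, so its `&self` receiver is one
// autoref step further away and only wins when the bound above fails.
impl<T> FallbackKind for &Wrap<T> {
    fn kind(&self) -> &'static str {
        "fallback"
    }
}

fn main() {
    assert_eq!((&Wrap(1i32)).kind(), "display"); // i32: Display → preferred
    assert_eq!((&Wrap(vec![1])).kind(), "fallback"); // Vec<i32> is not Display
}
```

This is how a single `(&active_value).default_value()` call site can dispatch to different behavior per value type without specialization in the language.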

Make sea-orm-cli & sea-orm-migration dependencies optional

#2367

Some engineering teams prefer vendoring sea-orm-cli into their own project as part of the cargo workspace, and so would like to have more control of the dependency graph. This change makes it possible to pick the exact features needed by your project.
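As a hedged sketch of the kind of selection this enables (feature names are illustrative; check the crate's manifest for the exact list):

```toml
[dependencies]
sea-orm-migration = { version = "1.1", default-features = false, features = [
    "sqlx-postgres",        # only the driver you need
    "runtime-tokio-rustls", # only one async runtime / TLS stack
] }
```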

Enhancements

  • Impl IntoCondition for RelationDef #2587
    This allows using RelationDef directly where the query API expects an IntoCondition:
let query = Query::select()
.from(fruit::Entity)
.inner_join(cake::Entity, fruit::Relation::Cake.def())
.to_owned();

assert_eq!(
query.to_string(MysqlQueryBuilder),
r#"SELECT FROM `fruit` INNER JOIN `cake` ON `fruit`.`cake_id` = `cake`.`id`"#
);
assert_eq!(
query.to_string(PostgresQueryBuilder),
r#"SELECT FROM "fruit" INNER JOIN "cake" ON "fruit"."cake_id" = "cake"."id""#
);
assert_eq!(
query.to_string(SqliteQueryBuilder),
r#"SELECT FROM "fruit" INNER JOIN "cake" ON "fruit"."cake_id" = "cake"."id""#
);
  • Use fully-qualified syntax for ActiveEnum associated type #2552
  • Added try_getable_postgres_array!(Vec<u8>) (to support bytea[]) #2503
  • Accept LikeExpr in like and not_like #2549
  • Loader: retain only unique key values in the query condition #2569
  • Add proxy transaction impl #2573
  • Relax TransactionError's trait bound for errors to allow anyhow::Error #2602
  • [sea-orm-cli] Fix PgVector codegen #2589
  • [sea-orm-cli] Support postgres array in expanded format #2545

Bug fixes

  • Quote type properly in AsEnum casting #2570
assert_eq!(
lunch_set::Entity::find()
.select_only()
.column(lunch_set::Column::Tea)
.build(DbBackend::Postgres)
.to_string(),
r#"SELECT CAST("lunch_set"."tea" AS "text") FROM "lunch_set""#
// "text" is now quoted; will work for "text"[] as well
);
  • Include custom column_name in DeriveColumn Column::from_str impl #2603
#[derive(DeriveEntityModel)]
pub struct Model {
#[sea_orm(column_name = "lAsTnAmE")]
last_name: String,
}

assert!(matches!(Column::from_str("lAsTnAmE").unwrap(), Column::LastName));
  • Check if url is well-formed before parsing (avoid panic) #2558
let db = Database::connect("postgre://sea:sea@localhost/bakery").await?;
// note the missing `s`; results in `DbErr::Conn`
  • QuerySelect::column_as method cast ActiveEnum column #2551
#[derive(Debug, FromQueryResult, PartialEq)]
struct SelectResult {
tea_alias: Option<Tea>,
}

assert_eq!(
SelectResult {
tea_alias: Some(Tea::EverydayTea),
},
Entity::find()
.select_only()
.column_as(Column::Tea, "tea_alias")
.into_model()
.one(db)
.await?
.unwrap()
);
  • Fix unicode string enum #2218
#[derive(Debug, Clone, PartialEq, Eq, EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "String", db_type = "String(StringLen::None)")]
pub enum SeaORM {
#[sea_orm(string_value = "씨오알엠")]
씨오알엠,
}

Upgrades

  • Upgrade sqlx to 0.8.4 #2562
  • Upgrade sea-query to 0.32.5
  • Upgrade sea-schema to 0.16.2
  • Upgrade heck to 0.5 #2218

House Keeping

  • Replace once_cell crate with std equivalent #2524 (available since Rust 1.80)
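A sketch of what this migration looks like in user code (Rust 1.80+): std::sync::LazyLock is the std equivalent of once_cell::sync::Lazy, with the same usage pattern.

```rust
use std::sync::LazyLock;

// before: static GREETING: once_cell::sync::Lazy<String> = Lazy::new(|| ...);
static GREETING: LazyLock<String> = LazyLock::new(|| format!("hello, {}", "world"));

fn main() {
    // Initialized lazily on first access, thread-safe.
    assert_eq!(GREETING.as_str(), "hello, world");
}
```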

Release Planning

SeaORM 1.0 is a stable release. As demonstrated, we are able to ship many new features without breaking the API. The 1.1 version will be maintained until October 2025, and we'll likely release a 1.2 version with some breaking changes afterwards.

The underlying library SeaQuery will undergo an overhaul and be promoted to 1.0.

If you have suggestions on breaking changes, you are welcome to post them in the discussions:

SQL Server Support

We've been beta-testing SQL Server for SeaORM for a while. SeaORM X offers the same SeaORM API for MSSQL. We ported all test cases and most examples, complemented by MSSQL specific documentation. If you are building enterprise software for your company, you can request commercial access.

Features:

  • SeaQuery + SeaSchema
  • Entity generation with sea-orm-cli
  • GraphQL with Seaography
  • Nested transaction, connection pooling and multi-async runtime

๐Ÿ–ฅ๏ธ SeaORM Pro: Professional Admin Panelโ€‹

SeaORM Pro is an admin panel solution allowing you to quickly and easily launch an admin panel for your application - frontend development skills not required, but certainly nice to have!

Features:

  • Full CRUD
  • Built on React + GraphQL
  • Built-in GraphQL resolver
  • Customize the UI with simple TOML

Getting Started

Latest features

The latest release of SeaORM Pro has a new feature, Model Editor. Instead of updating data in a pop-up dialog, the editor offers a dedicated page with a deep link to view and update data. It also offers more control over the UI layout.

If you feel generous, a small donation will be greatly appreciated, and goes a long way towards sustaining the organization.

A big shout out to our GitHub sponsors 😇:

Gold Sponsors

QDX pioneers quantum dynamics-powered drug discovery, leveraging AI and supercomputing to accelerate molecular modeling. We're grateful to QDX for sponsoring the development of SeaORM, the SQL toolkit that powers their data-intensive applications.

GitHub Sponsors

Numeus

Caido

Data Intuitive

Marcus Buffett

MS

wh7f


Afonso Barracha

Dean Sheather

KallyDev

Manfred Lee

MasakiMiyazaki

What's new in SeaORM 1.1.7

· 14 min read
SeaQL Team
Chris Tsang
SeaORM 1.0 Banner

This blog post summarizes the new features and enhancements introduced in SeaORM 1.1:

New Features

Support Postgres Vector

#2500 The popular pgvector extension enables efficient storage and querying of high-dimensional vector data, supporting applications like similarity search, recommendation systems, and other AI tools.

Thanks to the contribution of @28Smiles, PgVector is now integrated nicely into the SeaQL ecosystem (under feature flag postgres-vector).

// Model
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "image_model")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub embedding: PgVector,
}

// Schema
sea_query::Table::create()
.table(image_model::Entity.table_ref())
.col(ColumnDef::new(Column::Id).integer().not_null().auto_increment().primary_key())
.col(ColumnDef::new(Column::Embedding).vector(None).not_null())
..

// Insert
ActiveModel {
id: NotSet,
embedding: Set(PgVector::from(vec![1., 2., 3.])),
}
.insert(db)
.await?

Nested Objects in Relational Queries

#2508 #2179 #1716

We now have a good answer to "Why SeaORM does not nest objects for parent-child relation"!

The latest improvements to the FromQueryResult and DerivePartialModel macros allow you to nest objects easily, simplifying the construction of complex queries.

To illustrate, let's take a look at the Bakery Schema again.

Bakery ERD

As a simple first example, we'd like to select Cake with Bakery:

#[derive(FromQueryResult)]
struct Cake {
id: i32,
name: String,
#[sea_orm(nested)]
bakery: Option<Bakery>,
}

#[derive(FromQueryResult)]
struct Bakery {
#[sea_orm(from_alias = "bakery_id")]
id: i32,
#[sea_orm(from_alias = "bakery_name")]
brand: String,
}

let cake: Cake = cake::Entity::find()
.select_only()
.column(cake::Column::Id)
.column(cake::Column::Name)
.column_as(bakery::Column::Id, "bakery_id")
.column_as(bakery::Column::Name, "bakery_name")
.left_join(bakery::Entity)
.order_by_asc(cake::Column::Id)
.into_model()
.one(db)
.await?
.unwrap();

assert_eq!(
cake,
Cake {
id: 1,
name: "Basque cheesecake".to_string(),
bakery: Some(Bakery {
id: 20,
brand: "Super Baker".to_string(),
})
}
);

Because the tables cake and bakery have some duplicate column names, we'd have to do custom selects. select_only here clears the default select list, and we apply aliases with column_as. Then, in FromQueryResult we use from_alias to map the query result back to the nested struct.

You may wonder whether there's a way to avoid the aliasing and mapping. Yes! That's where DerivePartialModel comes into play. The previous example can be written as:

#[derive(DerivePartialModel)] // FromQueryResult is no longer needed
#[sea_orm(entity = "cake::Entity", from_query_result)]
struct Cake {
id: i32,
name: String,
#[sea_orm(nested)]
bakery: Option<Bakery>,
}

#[derive(DerivePartialModel)]
#[sea_orm(entity = "bakery::Entity", from_query_result)]
struct Bakery {
id: i32,
#[sea_orm(from_col = "Name")]
brand: String,
}

// same as previous example, but without the custom selects
let cake: Cake = cake::Entity::find()
.left_join(bakery::Entity)
.order_by_asc(cake::Column::Id)
.into_partial_model()
.one(db)
.await?
.unwrap();

Under the hood, a bakery_ prefix will be added to the column aliases in the SQL query.

SELECT
"cake"."id" AS "id",
"cake"."name" AS "name",
"bakery"."id" AS "bakery_id",
"bakery"."name" AS "bakery_brand"
FROM "cake"
LEFT JOIN "bakery" ON "cake"."bakery_id" = "bakery"."id"
ORDER BY "cake"."id" ASC LIMIT 1

Now, let's look at a more advanced three-way join. Our join tree starts from Order:

Order -> Customer
      -> LineItem -> Cake

#[derive(Debug, DerivePartialModel, PartialEq)]
#[sea_orm(entity = "order::Entity", from_query_result)]
struct Order {
id: i32,
total: Decimal,
#[sea_orm(nested)]
customer: Customer,
#[sea_orm(nested)]
line: LineItem,
}

#[derive(Debug, DerivePartialModel, PartialEq)]
#[sea_orm(entity = "customer::Entity", from_query_result)]
struct Customer {
name: String,
}

#[derive(Debug, DerivePartialModel, PartialEq)]
#[sea_orm(entity = "lineitem::Entity", from_query_result)]
struct LineItem {
price: Decimal,
quantity: i32,
#[sea_orm(nested)]
cake: Cake,
}

#[derive(Debug, DerivePartialModel, PartialEq)]
#[sea_orm(entity = "cake::Entity", from_query_result)]
struct Cake {
name: String,
}

let items: Vec<Order> = order::Entity::find()
.left_join(customer::Entity)
.left_join(lineitem::Entity)
.join(JoinType::LeftJoin, lineitem::Relation::Cake.def())
.order_by_asc(order::Column::Id)
.order_by_asc(lineitem::Column::Id)
.into_partial_model()
.all(db)
.await?;

assert_eq!(
items,
[
Order {
id: 101,
total: Decimal::from(10),
customer: Customer {
name: "Bob".to_owned()
},
line: LineItem {
cake: Cake {
name: "Cheesecake".to_owned()
},
price: Decimal::from(2),
quantity: 2,
}
},
..
]
);

That's it! Hope you like these new features, and a huge thanks to @Goodjooy for laying the foundation, @jreppnow for implementing the nested logic, and everyone who participated in the discussion.

Bonus: PartialModel -> ActiveModel

#2517 DerivePartialModel got another extension to derive IntoActiveModel as well. Absent fields will be filled with NotSet. This allows you to have a cake and eat it!

#[derive(DerivePartialModel)]
#[sea_orm(entity = "cake::Entity", into_active_model)]
struct PartialCake {
id: i32,
name: String,
}

let partial_cake = PartialCake {
id: 12,
name: "Lemon Drizzle".to_owned(),
};

// this is now possible:
assert_eq!(
cake::ActiveModel {
..partial_cake.into_active_model()
},
cake::ActiveModel {
id: Set(12),
name: Set("Lemon Drizzle".to_owned()),
..Default::default()
}
);

Three way select

#2518 PartialModel is powerful, but if you still need to do non-nested selects, there's SelectThree, an extension to SelectTwo:

Order -> Lineitem -> Cake
let items: Vec<(order::Model, Option<lineitem::Model>, Option<cake::Model>)> =
order::Entity::find()
.find_also_related(lineitem::Entity)
.and_also_related(cake::Entity)
.order_by_asc(order::Column::Id)
.order_by_asc(lineitem::Column::Id)
.all(db)
.await?;

Insert heterogeneous models

#2433 Insert many now allows active models to have different column sets (it previously panicked). Missing columns will be filled with NULL. This makes seeding data (e.g. with Loco) a seamless operation.

let apple = cake_filling::ActiveModel {
cake_id: ActiveValue::set(2),
filling_id: ActiveValue::NotSet,
};
let orange = cake_filling::ActiveModel {
cake_id: ActiveValue::NotSet,
filling_id: ActiveValue::set(3),
};
assert_eq!(
Insert::<cake_filling::ActiveModel>::new()
.add_many([apple, orange])
.build(DbBackend::Postgres)
.to_string(),
r#"INSERT INTO "cake_filling" ("cake_id", "filling_id") VALUES (2, NULL), (NULL, 3)"#,
);

Improved Seaography Integration

#2403 We've simplified the code by allowing you to register entities into Seaography's GraphQL schema directly within the entity module.

pub mod prelude;

pub mod sea_orm_active_enums;

pub mod baker;
pub mod bakery;
pub mod cake;
pub mod cakes_bakers;

seaography::register_entity_modules!([
baker,
bakery,
cake,
cakes_bakers,
]);

seaography::register_active_enums!([
sea_orm_active_enums::Tea,
]);

Enhancements

  • Insert many can now return all inserted models or their keys:

assert_eq!(
Entity::insert_many([
ActiveModel { id: NotSet, name: Set("two".into()) },
ActiveModel { id: NotSet, name: Set("three".into()) },
])
.exec_with_returning_many(db)
.await
.unwrap(),
[
Model { id: 2, name: "two".into() },
Model { id: 3, name: "three".into() },
]
);

assert_eq!(
cakes_bakers::Entity::insert_many([
cakes_bakers::ActiveModel {
cake_id: Set(1),
baker_id: Set(2),
},
cakes_bakers::ActiveModel {
cake_id: Set(2),
baker_id: Set(1),
},
])
.exec_with_returning_keys(db)
.await
.unwrap(),
[(1, 2), (2, 1)]
);
  • Added DeleteOne::exec_with_returning & DeleteMany::exec_with_returning #2432
  • Support complex type path in DeriveIntoActiveModel #2517
#[derive(DeriveIntoActiveModel)]
#[sea_orm(active_model = "<fruit::Entity as EntityTrait>::ActiveModel")]
struct Fruit {
cake_id: Option<Option<i32>>,
}
  • Added DatabaseConnection::close_by_ref #2511
pub async fn close(self) -> Result<(), DbErr> { .. } // existing
pub async fn close_by_ref(&self) -> Result<(), DbErr> { .. } // new
  • Added Schema::json_schema_from_entity to construct schema metadata for Entities
assert_eq!(
Schema::new(DbBackend::MySql).json_schema_from_entity(lunch_set::Entity),
json!({
"columns": [
{
"name": "id",
"nullable": false,
"type": "integer"
},
{
"name": "name",
"nullable": false,
"type": "string"
},
{
"name": "tea",
"nullable": false,
"type": {
"name": "tea",
"variants": [
"EverydayTea",
"BreakfastTea"
]
}
}
],
"primary_key": [
"id"
]
})
);
  • Construct DatabaseConnection directly from sqlx::PgPool, sqlx::SqlitePool and sqlx::MySqlPool #2348
// these are implemented:
impl From<MySqlPool> for SqlxMySqlPoolConnection { .. }
impl From<MySqlPool> for DatabaseConnection { .. }

// so this is now possible:
let db: DatabaseConnection = mysql_pool.into();
  • Expose underlying row types (e.g. sqlx::postgres::PgRow) #2265
  • [sea-orm-migration] Allow modifying the connection in migrations #2397
#[async_std::main]
async fn main() {
cli::run_cli_with_connection(migration::Migrator, |connect_options| async {
let db = Database::connect(connect_options).await?;
if db.get_database_backend() == DatabaseBackend::Sqlite {
register_sqlite_functions(&db).await;
}
Ok(db)
}).await;
}
  • [sea-orm-cli] Added MIGRATION_DIR environment variable #2419
  • [sea-orm-cli] Added acquire-timeout option #2461
  • [sea-orm-cli] Added impl-active-model-behavior option #2487
  • [sea-orm-cli] Added with-prelude option #2322
    • all: the default value (current behaviour), will generate prelude.rs and add it to mod.rs / lib.rs
    • all-allow-unused-imports: will generate prelude.rs and add it to mod.rs, plus adding #![allow(unused_imports)] in the module
    • none: will not generate prelude.rs

Upgrades

  • Upgrade sqlx to 0.8 #2305 #2371
  • Upgrade bigdecimal to 0.4 #2305
  • Upgrade sea-query to 0.32.0 #2305
  • Upgrade sea-query-binder to 0.7 #2305
  • Upgrade sea-schema to 0.16 #2305
  • Upgrade ouroboros to 0.18 #2353

House Keeping

  • Cleanup legacy ActiveValue::Set #2515
  • Remove futures crate, replace with futures-util #2466

Release Planning

SeaORM 1.0 is a stable release. As demonstrated, we are able to ship many new features without breaking the API. The 1.x version will be updated until at least October 2025, and we'll decide whether to release a 2.0 version or extend the 1.x life cycle.

SQL Server Support

We've been beta-testing SQL Server for SeaORM for a while. If you are building software for your company, please request early access.

If you feel generous, a small donation will be greatly appreciated, and goes a long way towards sustaining the organization.

A big shout out to our GitHub sponsors 😇:

Numeus

Caido

Marcus Buffett


Maxwell Koo

Spencer Ferris

Dean Sheather

Naoki Ikeguchi

Data Intuitive

Marlon Mueller

MasakiMiyazaki

Manfred Lee

KallyDev

Afonso Barracha

gnuphie

Wait... there's one more thing

"Are we web yet?" is a recurring question in the Rust community; the answer is yes, yes, YES!

If you are looking for a batteries-included full-stack web development framework that is strongly-typed, asynchronous, robust and high-performance, look no further than Rust + Loco + SeaQL. We highly recommend giving Loco a try - "It's Like Ruby on Rails, but for Rust."

With this final piece of software, my vision for a complete full-stack Rust environment is now realized. After years of development in SeaORM + Seaography, I am excited to introduce it to you:

๐Ÿ–ฅ๏ธ SeaORM Pro: Professional Admin Panelโ€‹

SeaORM Pro is an admin panel solution allowing you to quickly and easily launch an admin panel for your application - frontend development skills not required, but certainly nice to have!

Features:

  • Full CRUD
  • Built on React + GraphQL
  • Built-in GraphQL resolver
  • Customize the UI with simple TOML

Learn More

Tutorial: Modeling Inheritance in SeaORM

· 7 min read
SeaQL Team
Chris Tsang

Introduction

This tutorial walks you through the design and implementation of a REST API endpoint that involves some complex relational queries.

The API looks like this:

POST /api/v1/complex-products

Parameters (JSON body):

Field   Type                  Description
id      int[]                 Get products with these ids
name    string                Search products with name matching this
type    enum of ProductType   Limit products to this type

Return example:

{
    "id": 1,
    "name": "Mountain Bike V2",
    "type": "Bike",
    "price": "2500.0",
    "lot_size": "1.0",
    "date_added": "2020-01-01T00:00:00",
    "last_modified": "2025-01-02T12:18:54",
    "history": [
        {
            "from": "2020-01-01T00:00:00",
            "until": "2022-01-01T00:00:00",
            "name": "Mountain Bike V1"
        }
    ]
}

Schema

Imagine we run a store with many types of products. Each product type has its own attributes and we want to factor out the common product attributes to a base class.

In OOP terms:

struct BaseProduct {
    r#type: ProductType,
    ..
}

// made up syntax, but it means inheritance
struct ComplexProduct: BaseProduct { .. }

enum ProductType { .. }

In SQL terms, we have 3 entities and 2 relations among them:

  • BaseProduct -> ComplexProduct
  • BaseProduct -> ProductType

Below are the SeaORM Entities:

BaseProduct

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "base_product")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i64,
    #[sea_orm(unique)]
    pub name: String,
    pub type_id: i32, // linking to product_type
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(has_one = "super::complex_product::Entity")]
    ComplexProduct,
    #[sea_orm(has_many = "super::product_history::Entity")]
    ProductHistory,
    #[sea_orm(
        belongs_to = "super::product_type::Entity",
        from = "Column::TypeId",
        to = "super::product_type::Column::Id",
        on_update = "NoAction",
        on_delete = "NoAction"
    )]
    ProductType,
}

ComplexProduct

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "complex_product")]
pub struct Model {
    #[sea_orm(primary_key, auto_increment = false)]
    pub product_id: i64, // linking to base_product
    #[sea_orm(column_type = "Decimal(Some((30, 15)))", nullable)]
    pub price: Option<Decimal>,
    #[sea_orm(column_type = "Decimal(Some((30, 15)))", nullable)]
    pub lot_size: Option<Decimal>,
    pub date_added: DateTime,
    pub last_modified: DateTime,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::base_product::Entity",
        from = "Column::ProductId",
        to = "super::base_product::Column::Id",
        on_update = "NoAction",
        on_delete = "Cascade"
    )]
    BaseProduct,
}

ProductType

Basically an 'enum table'.

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "product_type")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    #[sea_orm(unique)]
    pub name: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(has_many = "super::base_product::Entity")]
    BaseProduct,
}

1. Define result data structure

#[derive(Clone, Debug, PartialEq, Eq, FromQueryResult, Serialize)]
pub struct ComplexProduct {
    pub id: i64,
    pub name: String,
    pub r#type: String,
    pub price: Decimal,
    pub lot_size: Decimal,
    pub date_added: DateTime,
    pub last_modified: DateTime,
    #[sea_orm(skip)]
    pub history: Vec<product_history::Model>,
}

With Serialize, you can transform the select result into JSON directly.

2. Define helper aliases

#[derive(DeriveIden, Clone, Copy)]
pub struct Id;

#[derive(DeriveIden, Clone, Copy)]
pub struct Name;

#[derive(DeriveIden, Clone, Copy)]
pub struct Base;

use complex_product::Entity as Prod;
pub type ProdCol = <Prod as EntityTrait>::Column;
type ProdRel = <Prod as EntityTrait>::Relation;

This would make our code much more concise and readable.

Avoid using Alias::new because it's error-prone and slightly more expensive.

3. Custom selects

fn query() -> Select<complex_product::Entity> {
    complex_product::Entity::find()
        .select_only()
        .tbl_col_as((Base, Id), "id")
        .tbl_col_as((Base, Name), "name")
        .column_as(product_type::Column::Name, "type")
        .column_as(ProdCol::Price, "price")
        .column_as(ProdCol::LotSize, "lot_size")
        .column_as(ProdCol::DateAdded, "date_added")
        .column_as(ProdCol::LastModified, "last_modified")
        .join_as(JoinType::InnerJoin, ProdRel::BaseProduct.def(), Base)
        .join(JoinType::InnerJoin, base_product::Relation::ProductType.def().from_alias(Base))
        .order_by_asc(Expr::col((Base, Id)))
}

Our query starts from ComplexProduct. We join back to BaseProduct, alias it as Base. We then join to ProductType via Base:

ComplexProduct -> BaseProduct as Base -> ProductType

column_as automatically prefixes the column with the table name. from_alias does the magic here, allowing us to reuse the existing Relation by overwriting the left-hand side of the ON condition.

You can use the join method to construct complex joins in select queries. It takes any RelationDef, and you can further customize the join conditions. You can find more examples here.

4. Filter Conditions

Let's define a struct for the query parameters. Again, we use serde here so it can be deserialized from JSON.

#[derive(Default, Deserialize)]
pub struct Query {
    #[serde(default)]
    pub id: Vec<i64>, // if unspecified, will be an empty vec
    pub name: Option<String>,
    pub r#type: Option<String>,
}

Then, we transform the parameters into SQL where conditions:

fn condition(query: Query) -> Condition {
    Condition::all()
        .add_option(if !query.id.is_empty() {
            Some(Expr::col((Base, Id)).is_in(query.id))
        } else {
            None
        })
        .add_option(if let Some(name) = &query.name {
            Some(Expr::col((Base, Name)).like(name))
        } else {
            None
        })
        .add_option(if let Some(r#type) = &query.r#type {
            Some(product_type::Column::Name.eq(r#type))
        } else {
            None
        })
}

Bonus tip: if you're only using Postgres, you can replace is_in with any:

use sea_orm::sea_query::extension::postgres::PgFunc;

Expr::col((Base, Id)).eq(PgFunc::any(query.id)) // WHERE base.id = ANY($N)

Combining the above functions, here is how we implement the API endpoint:

pub async fn query_products(db: DbConn, q: Query)
    -> Result<Vec<ComplexProduct>, DbErr>
{
    query()
        .filter(condition(q))
        .into_model::<ComplexProduct>()
        .all(&db)
        .await
}

5. Associated models

Suppose we have a data structure associated with each BaseProduct recording its history.

ProductHistory

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize)]
#[sea_orm(table_name = "product_history")]
pub struct Model {
    #[sea_orm(primary_key)]
    #[serde(skip)]
    pub id: i32,
    #[serde(skip)]
    pub product_id: i64,
    pub from: DateTime,
    pub until: DateTime,
    pub name: Option<String>,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::base_product::Entity",
        from = "Column::ProductId",
        to = "super::base_product::Column::Id",
        on_update = "NoAction",
        on_delete = "Cascade"
    )]
    BaseProduct,
}

Let's make a helper function to query the histories associated with a set of products:

fn history_of(ids: Vec<i64>) -> Select<product_history::Entity> {
    product_history::Entity::find()
        .filter(Expr::col(product_history::Column::ProductId).is_in(ids))
        .order_by_asc(product_history::Column::Id)
}

let histories = history_of(products.iter().map(|s| s.id).collect::<Vec<_>>())
    .all(&db)
    .await?;

The final step is to associate product_history::Model to ComplexProduct:

// parent should be unique and already ordered by id.
fn associate(
    mut parent: Vec<ComplexProduct>,
    children: Vec<product_history::Model>,
) -> Vec<ComplexProduct> {
    let len = parent.len();
    parent.dedup_by_key(|s| s.id);
    if len != parent.len() {
        warn!("parent is not unique.");
    }

    let parent_id_map: HashMap<i64, usize> = parent
        .iter()
        .enumerate()
        .map(|(i, s)| (s.id, i))
        .collect();

    // put children into associated parent
    for item in children {
        if let Some(index) = parent_id_map.get(&item.product_id) {
            parent[*index].history.push(item);
        }
    }

    parent
}

let products = associate(products, histories);

This is sometimes called the "data loader" pattern, and it can be generalized with generics to work with any models.
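
A generic version could look like the following sketch. The traits and names here are ours for illustration (SeaORM does not define them); they just capture what the association needs: parents expose an id, children expose a parent id.

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Hypothetical helper traits, not part of SeaORM.
pub trait HasId {
    type Id: Eq + Hash + Copy;
    fn id(&self) -> Self::Id;
}

pub trait HasParentId<P: HasId> {
    fn parent_id(&self) -> P::Id;
}

/// Bucket each child into its parent in one O(parents + children) pass.
pub fn associate_generic<P: HasId, C: HasParentId<P>>(
    mut parents: Vec<P>,
    children: Vec<C>,
    push: impl Fn(&mut P, C),
) -> Vec<P> {
    let index: HashMap<P::Id, usize> = parents
        .iter()
        .enumerate()
        .map(|(i, p)| (p.id(), i))
        .collect();
    for child in children {
        if let Some(&i) = index.get(&child.parent_id()) {
            push(&mut parents[i], child);
        }
    }
    parents
}

// Toy models for demonstration.
pub struct Parent { pub id: i64, pub kids: Vec<i64> }
pub struct Child { pub parent_id: i64, pub value: i64 }

impl HasId for Parent {
    type Id = i64;
    fn id(&self) -> i64 { self.id }
}

impl HasParentId<Parent> for Child {
    fn parent_id(&self) -> i64 { self.parent_id }
}
```

The push closure decides how a child is attached, so the same function serves any parent/child pair.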

Conclusion

SeaORM's type system encourages you to write modular and reusable code, embracing the "Don't repeat yourself" principle.

You define the Entities and Relations once.

You define the aliases and query helpers once.

You can pass the Select<T> and Condition around.

You then assemble these pieces together to implement any complex API!

🦀 Rustacean Sticker Pack

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

SeaQL Community Survey 2024 Results

· 5 min read
Billy Chan
SeaQL Team

695 members of the SeaQL community from 47 countries kindly contributed their thoughts on using SeaQL libraries, learning Rust and employing Rust in their day to day development lives. From these responses we hope to get an understanding of where the SeaQL and Rust community stands in 2024.

This is our first community survey; we will conduct it annually to keep track of how the community evolves over time.

Developer Profile

Q. Where are you located?

Participants are from 47 countries across the world!

Other: Armenia, Azerbaijan, Bangladesh, Cambodia, Colombia, Croatia, Czech Republic, Denmark, Egypt, Finland, Guatemala, Israel, Italy, Kazakhstan, Kenya, Nepal, Romania, Saudi Arabia, Slovakia, Slovenia, South Africa, Spain, Sweden, Syria, Turkey

Q. Including any education, how many years have you been coding in total?

Q. Which of the following describes your current job, the one you do most of the time?

Other: Academic researcher; Marketing or sales professional; Engineer, data; Data or business analyst; Solution Architect; Developer, QA or test; Designer; Developer, game or graphics; Hobbyist Developer; Educator

Using SeaQL Libraries in Building a Project

Q. Roughly how long have you been using SeaQL libraries?

Q. Which SeaQL libraries are you using?

Q. Are you using SeaQL libraries in a personal, academic or professional context?

Q. How many team members (including you) are working on the project?

Q. Can you categorize the nature of the project?

Other: Backend for a Web 3.0 Application; IoT Application; Backend System; A Document Generator; Mobile Project; SaaS; Database for a Cybersecurity Software; API; A Learning Project; Backend for a Multiplatform Application (Using Tauri); Frontend + Backend Work

Q. What is your development environment?

Linux Breakdown

Windows Breakdown

macOS Breakdown

Q. Which database(s) do you use?

Other: SurrealDB, EdgeDB, MariaDB, MariaDB Galera Cluster, MongoDB, libSQL, MSSQL, Timescale, Yugabyte, Oracle Database

Q. Which web framework are you using?

Other: SolidJS, Tauri, Tonic, trillium, Vue, Warp

Q. What is the deployment environment?

Other: DigitalOcean App Platform, DogYun, Fly.io, Oracle Cloud

Using Rust at Work

Q. Are you using Rust at work?

Q. Which industry is your company in?

Other: Space Situational Awareness, Security, Ads, Betting, Construction, Automotive, Last Mile Logistics, IoT

Q. How many engineers in your company are dedicated to writing Rust?

Q. What is the size of your company?

Q. Which layer(s) of the technology stack are using Rust?

Other: Embedded, Hardware Drivers

Q. Would your company be interested in using SeaQL libraries with other databases?

Learning Rust

Q. Are you learning / new to Rust?

Q. Which language(s) are you most familiar with?

Other: Erlang, Cobol, Lua, Perl, RPG, Assembly

Q. Do you find Rust easy or hard to learn?

Q. What motivates you to learn Rust?

Other: School makes me learn; Better option to learn than C/C++; It's fantastic; Stability of ecosystem, security, speed; The ease of writing clean code

Q. What learning resources do you rely on?

Q. What is your first project built using Rust?

Q. Are you familiar with SQL?

About SeaQL Libraries

Q. Why did you choose SeaQL libraries?

Other: sea-orm-cli codegen; Ergonomic API; Word of mouth

Q. Which aspects of SeaORM do you want to see advancement on?

Other: Better macros for less boilerplate code; Automatic migration; Schema diffing for migrations

Q. What tools would you be interested in using, if developed first-party by SeaQL?

Share Your Thoughts

Q. Anything else you want to say?

Didn't expect this section to turn into a testimonial, thank you for all the kind words :)

Love your toolchain, it's awesome to work with!

Wonderful library! To provide some additional feedback: I found myself missing documentation in certain parts of the API, leading to me having to guess what the method actually does for a given database backend (namely in relation to query builders).

Schema/migration management would be really nice. I have used Django in the past and loved the automatic migration file generation.

SeaORM is great - it just replaces existing asp.net entity framework projects at our company.


What's new in SeaQuery 0.32.x

· 7 min read

🎉 We are pleased to release SeaQuery 0.32.0 / 0.32.1! Here are some feature highlights 🌟:

New Features

Unify Expr and SimpleExpr Methods with ExprTrait #791

Previously, "operator" methods (e.g. add, eq) were duplicated across Expr and SimpleExpr, but the list of methods was slightly different for each. And since Expr and SimpleExpr are distinct types, this made writing generic code difficult.

The ExprTrait looks like this:

pub trait ExprTrait: Sized {
    // Required methods
    fn as_enum<N>(self, type_name: N) -> SimpleExpr
    where N: IntoIden;
    fn binary<O, R>(self, op: O, right: R) -> SimpleExpr
    where O: Into<BinOper>,
          R: Into<SimpleExpr>;
    fn cast_as<N>(self, type_name: N) -> SimpleExpr
    where N: IntoIden;
    fn unary(self, o: UnOper) -> SimpleExpr;

    // Provided methods
    fn add<R>(self, right: R) -> SimpleExpr where R: Into<SimpleExpr> { ... }
    fn and<R>(self, right: R) -> SimpleExpr where R: Into<SimpleExpr> { ... }
    fn between<A, B>(self, a: A, b: B) -> SimpleExpr
    where A: Into<SimpleExpr>,
          B: Into<SimpleExpr> { ... }
    fn div<R>(self, right: R) -> SimpleExpr where R: Into<SimpleExpr> { ... }
    fn eq<R>(self, right: R) -> SimpleExpr where R: Into<SimpleExpr> { ... }
    fn equals<C>(self, col: C) -> SimpleExpr where C: IntoColumnRef { ... }

    // omitting the where clauses below to keep it concise ..

    fn gt<R>(self, right: R) -> SimpleExpr;
    fn gte<R>(self, right: R) -> SimpleExpr;
    fn in_subquery(self, sel: SelectStatement) -> SimpleExpr;
    fn in_tuples<V, I>(self, v: I) -> SimpleExpr;
    fn is<R>(self, right: R) -> SimpleExpr;
    fn is_in<V, I>(self, v: I) -> SimpleExpr;
    fn is_not<R>(self, right: R) -> SimpleExpr;
    fn is_not_in<V, I>(self, v: I) -> SimpleExpr;
    fn is_not_null(self) -> SimpleExpr;
    fn is_null(self) -> SimpleExpr;
    fn left_shift<R>(self, right: R) -> SimpleExpr;
    fn like<L>(self, like: L) -> SimpleExpr;
    fn lt<R>(self, right: R) -> SimpleExpr;
    fn lte<R>(self, right: R) -> SimpleExpr;
    fn modulo<R>(self, right: R) -> SimpleExpr;
    fn mul<R>(self, right: R) -> SimpleExpr;
    fn ne<R>(self, right: R) -> SimpleExpr;
    fn not(self) -> SimpleExpr;
    fn not_between<A, B>(self, a: A, b: B) -> SimpleExpr;
    fn not_equals<C>(self, col: C) -> SimpleExpr;
    fn not_in_subquery(self, sel: SelectStatement) -> SimpleExpr;
    fn not_like<L>(self, like: L) -> SimpleExpr;
    fn or<R>(self, right: R) -> SimpleExpr;
    fn right_shift<R>(self, right: R) -> SimpleExpr;
    fn sub<R>(self, right: R) -> SimpleExpr;
    fn bit_and<R>(self, right: R) -> SimpleExpr;
    fn bit_or<R>(self, right: R) -> SimpleExpr;
}

  • Added ExprTrait to unify Expr and SimpleExpr methods
  • Added impl<T> ExprTrait for T where T: Into<SimpleExpr> to maintain backwards compatibility for all Into<SimpleExpr> types, such as Value and FunctionCall
  • Added trait PgExpr: ExprTrait: database specific expression for Postgres and impl PgExpr for FunctionCall, ColumnRef, Keyword, LikeExpr, Value
  • Added trait SqliteExpr: ExprTrait: database specific expression for SQLite and impl SqliteExpr for FunctionCall, ColumnRef, Keyword, LikeExpr, Value

Support for Postgres Vector #774

Example:

assert_eq!(
    Query::select()
        .columns([Char::Character])
        .from(Char::Table)
        .and_where(
            Expr::col(Char::Character).eq(Expr::val(pgvector::Vector::from(vec![1.0, 2.0])))
        )
        .to_string(PostgresQueryBuilder),
    r#"SELECT "character" FROM "character" WHERE "character" = '[1,2]'"#
);

Support Partial Index #478

  • Support partial index CREATE INDEX .. WHERE ..

Example (Postgres):

assert_eq!(
    Index::create()
        .unique()
        .nulls_not_distinct()
        .name("partial-index-glyph-image-not-null")
        .table(Glyph::Table)
        .col(Glyph::Image)
        .and_where(Expr::col(Glyph::Image).is_not_null())
        .to_string(PostgresQueryBuilder),
    r#"CREATE UNIQUE INDEX "partial-index-glyph-image-not-null" ON "glyph" ("image") NULLS NOT DISTINCT WHERE "image" IS NOT NULL"#
);

Example (Sqlite):

assert_eq!(
    Index::create()
        .if_not_exists()
        .unique()
        .name("partial-index-glyph-image-not-null")
        .table(Glyph::Table)
        .col(Glyph::Image)
        .and_where(Expr::col(Glyph::Image).is_not_null())
        .to_string(SqliteQueryBuilder),
    r#"CREATE UNIQUE INDEX IF NOT EXISTS "partial-index-glyph-image-not-null" ON "glyph" ("image") WHERE "image" IS NOT NULL"#
);

Get Null Value

This one may seem like a bummer, but it is a type system problem. In order to support the Postgres protocol, SeaQuery's Value enum does not have an untyped Null variant. This new Value::as_null method allows you to:

  1. construct a typed null value
  2. nullify a value
  3. define generic functions (impl Into<Value>)

let v = Value::Int(Some(2));
let n = v.as_null();

assert_eq!(n, Value::Int(None));

// one liner:
assert_eq!(Into::<Value>::into(2.2).as_null(), Value::Double(None));
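
Point 3 deserves a small illustration. Below is a sketch with a simplified stand-in Value (the real sea_query::Value has many more variants); the idea is that a function generic over impl Into<Value> can always produce a correctly-typed null:

```rust
// Simplified stand-in for sea_query::Value, for illustration only.
#[derive(Debug, PartialEq)]
pub enum Value {
    Int(Option<i32>),
    Double(Option<f64>),
}

impl Value {
    // Keep the type, drop the payload: Int(Some(2)) -> Int(None).
    pub fn as_null(&self) -> Value {
        match self {
            Value::Int(_) => Value::Int(None),
            Value::Double(_) => Value::Double(None),
        }
    }
}

impl From<i32> for Value {
    fn from(v: i32) -> Self { Value::Int(Some(v)) }
}

impl From<f64> for Value {
    fn from(v: f64) -> Self { Value::Double(Some(v)) }
}

// Generic over anything convertible to Value, yet the null stays typed.
pub fn typed_null(v: impl Into<Value>) -> Value {
    v.into().as_null()
}
```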

Bitwise AND/OR Operators #841

Examples:

let query = Query::select()
    .expr(1.bit_and(2).eq(3))
    .to_owned();

assert_eq!(
    query.to_string(PostgresQueryBuilder),
    r#"SELECT (1 & 2) = 3"#
);

let query = Query::select()
    .expr(1.bit_or(2).eq(3))
    .to_owned();

assert_eq!(
    query.to_string(PostgresQueryBuilder),
    r#"SELECT (1 | 2) = 3"#
);

Enhancements

  • #817 Replaced Educe with manual implementations
    • This is an effort to cut down compilation time
  • #844 Added GREATEST & LEAST function
  • #836 Added ValueType::enum_type_name()
  • #835 Removed "one common table" restriction on recursive CTE

sea-query-derive

We've finally done it! The last bit of syn v1 is gone from our dependency tree:

sea-query % cargo tree | grep 'syn '
│ └── syn v2.0.39
│ │ └── syn v2.0.39 (*)
│ └── syn v2.0.39 (*)
├── syn v2.0.39 (*)
└── syn v2.0.39 (*)
│ │ └── syn v2.0.39 (*)
  • Merged #[enum_def] into sea-query-derive
  • #769 #[enum_def] now impl additional IdenStatic and AsRef<str>

sea-query-attr

We've merged this crate into sea-query-derive, and they will be maintained together from now on.

  • Updated syn, heck and darling
  • sea-query-attr is now deprecated

Upgrades

  • #798 Upgraded sqlx to 0.8
  • #798 Upgraded bigdecimal to 0.4
  • #802 Upgraded rusqlite to 0.32

Integration Examples

SeaQuery plays well with the other crates in the Rust ecosystem.

Community

SeaQL.org is an independent open-source organization run by passionate developers. If you like our projects, please star ⭐ and share our repositories. If you feel generous, a small donation via GitHub Sponsor will be greatly appreciated, and goes a long way towards sustaining the organization 🚢.

SeaQuery is a community driven project. We welcome you to participate, contribute and together build for Rust's future 🦀.


What's new in SeaStreamer 0.5

· 5 min read
Chris Tsang
SeaQL Team

🎉 We are pleased to release SeaStreamer 0.5.x!

Here is the summary of new features and enhancements:

sea-streamer-types

  • Added From<Url> and FromIterator<Url> for StreamerUri #28
  • Impl Default for Payload
  • Impl serde Serialize & Deserialize for StreamKey (enabled by the feature flag serde), so this is now possible:

#[derive(Serialize, Deserialize)]
struct MyStruct {
    stream_key: StreamKey,
}

sea-streamer-socket

  • The Socket library can now be compiled without the stdio backend #35

sea-streamer-redis

  • Support nanosecond timestamp in Redis (under feature flag nanosecond-timestamp). Redis's default Stream ID resolution is millisecond, and it can be changed to nanosecond with RedisConnectOptions::set_timestamp_format:
    let mut options = RedisConnectOptions::default();
    options.set_timestamp_format(TimestampFormat::UnixTimestampNanos);
  • Added RedisConnectOptions::set_message_field to set custom message field (the default used to be msg):
    let mut options = RedisConnectOptions::default();
    options.set_message_field("event");
  • Added RedisProducer::send_with_ts to specify custom timestamp:
    producer.send_with_ts(&stream_key, timestamp, message)?;
  • Added RedisProducer::flush_immut. This method is the same as RedisProducer::flush but without &mut self
  • Added RedisProducer::trim to perform XTRIM MAXLEN:
    producer.trim(&stream_key, maxlen).await?;
  • Fixed capacity overflow error in some cases

sea-streamer-file

  • Added a special SEA_STREAMER_WILDCARD stream key to subscribe to all streams in a file:
    let consumer: SeaConsumer = streamer
    .create_consumer(&[StreamKey::new(SEA_STREAMER_WILDCARD)?], options)
    .await?;

sea-streamer-fuse

We've shipped the first component library for stream processing! It currently has just one component, StreamJoin.

It is designed to be used in stream replay. In live streaming, if you have multiple streams from different sources and you want to multiplex them together, you can use the awesome futures_concurrency crate's Merge, and it just works!

use futures_concurrency::{stream::Merge, vec::Merge as Merged};

let mut consumers: Vec<SeaConsumer> = vec![stream_a, stream_b];
let streams: Vec<SeaMessageStream<'a>> = consumers.iter_mut().map(|ss| ss.stream()).collect();
let merged: Merged<SeaMessageStream<'a>> = streams.merge();

stream_a and stream_b can be heterogeneous, meaning they can be Kafka, Redis or even File.

How about in replay? In replay, different streams can flow at different paces, so if we naively merge them, the messages would come out of order.

To solve this problem, you can use StreamJoin::muxed:

type LiveStream<'a> = Merged<SeaMessageStream<'a>>;
let joined: StreamJoin<LiveStream<'a>, SeaMessage<'a>, StreamErr<BackendErr>> = StreamJoin::muxed(merged);

StreamJoin::align must be called manually to specify which streams should be aligned. Otherwise, messages will be out of order until the first message of each key arrives. Imagine a severely delayed stream sending its first message one day later; it would invalidate everything that came before it. However, the issue lies with the delayed stream itself, not the others.

In the example below, messages from the fast stream will be buffered, until a message from the slow stream arrives.

fast | (1) (2) (3) (4) (5)
slow | (2) (6)

Messages 1, 2 from fast will be buffered, until 2 from the slow stream arrives. Likewise, messages 3, 4, 5 will be buffered until 6 arrives.
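
The buffering idea above can be sketched in plain std (this is our illustration, not StreamJoin's actual implementation): keep a queue per stream key, track each key's latest timestamp as its frontier, and only release messages up to the minimum frontier across all aligned keys.

```rust
use std::collections::{HashMap, VecDeque};

// A toy message: (stream key, timestamp). Not SeaStreamer's types.
type Msg = (&'static str, u64);

/// Minimal sketch of the alignment idea behind StreamJoin.
pub struct Align {
    keys: Vec<&'static str>,
    buf: HashMap<&'static str, VecDeque<Msg>>,
    frontier: HashMap<&'static str, u64>,
}

impl Align {
    pub fn new(keys: &[&'static str]) -> Self {
        Self {
            keys: keys.to_vec(),
            buf: keys.iter().map(|k| (*k, VecDeque::new())).collect(),
            frontier: HashMap::new(),
        }
    }

    /// Feed one message; return every message that is now safe to emit,
    /// i.e. no slower stream can still produce an earlier timestamp.
    pub fn push(&mut self, msg: Msg) -> Vec<Msg> {
        self.frontier.insert(msg.0, msg.1);
        self.buf.get_mut(msg.0).unwrap().push_back(msg);
        // until every aligned key has spoken, nothing can be released
        if self.frontier.len() < self.keys.len() {
            return Vec::new();
        }
        let safe = *self.frontier.values().min().unwrap();
        let mut out = Vec::new();
        for k in &self.keys {
            let q = self.buf.get_mut(k).unwrap();
            while q.front().map_or(false, |m| m.1 <= safe) {
                out.push(q.pop_front().unwrap());
            }
        }
        out.sort_by_key(|m| m.1);
        out
    }
}
```

With the fast/slow example above, (1) and (2) from fast stay buffered until (2) from slow arrives.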

The StreamJoin component is generic and can actually be used outside of SeaStreamer; the only requirement is that the thing we want to align implements sea_streamer::Message:

impl Message for MyMessage {
    fn stream_key(&self) -> StreamKey { /* implement this */ }

    fn timestamp(&self) -> Timestamp { /* implement this */ }

    fn shard_id(&self) -> ShardId { /* doesn't matter */ }

    fn sequence(&self) -> SeqNo { /* doesn't matter */ }

    fn message(&self) -> Payload { /* doesn't matter */ }
}

Anecdote

Over the past year, we've been using SeaStreamer heavily in production and it served us well!

SeaStreamer File is really handy, because it supports live streaming and also doubles as an archive, which can be rotated and uploaded to the data lake every day. It has replaced our use of Redis in some same-host MPMC streaming scenarios.

Redis Streams is also super nice (fast and reliable) and especially easy with SeaStreamer. IMO it's underrated; it became our default choice for cross-host streaming.

By the way, SeaStreamer File is used as the tracing file format in FireDBG.


GraphQL based Admin Dashboard with Loco and Seaography

· 7 min read
Billy Chan
SeaQL Team

In this tutorial, we will develop a GraphQL based admin dashboard with Seaography and Loco.

If you haven't already, read the first and second tutorials of the series: Getting Started with Loco & SeaORM and Adding GraphQL Support to Loco with Seaography.

The full source code can be found here.

What is Seaography

Seaography is a GraphQL framework for building GraphQL resolvers using SeaORM entities. It ships with a CLI tool that can generate ready-to-compile Rust GraphQL servers from existing MySQL, Postgres and SQLite databases.

Setup React-Admin Frontend Boilerplate

We use React-Admin as the frontend framework. It provides a convenient boilerplate to start with:

$ npm init react-admin frontend

Select the data provider you want to use, and validate with Enter:
❯ None
I'll configure the data provider myself.

Select the auth provider you want to use, and validate with Enter:
❯ Hard coded local username/password

Enter the name of a resource you want to add, and validate with Enter (leave empty to finish):
❯ (Leave empty and press Enter)

How do you want to install the dependencies?
❯ Using npm

Run the boilerplate, then visit http://localhost:5173/; you should see the welcome page:

$ cd frontend

$ npm install

$ npm run dev

> dev
> vite

VITE v4.5.3 ready in 440 ms

➜ Local: http://localhost:5173/

Now, we want to display the React-Admin data table template with mock data. First, we need to add the ra-data-json-server dependency, which provides a ready-to-go mock data loader:

$ npm install ra-data-json-server

To prepare the mock data loader, we create a new file:

frontend/src/dataProvider.ts
import jsonServerProvider from 'ra-data-json-server';

export const dataProvider = jsonServerProvider('https://jsonplaceholder.typicode.com');

Then, we change the UI file:

frontend/src/index.tsx
+ import { Admin, Resource, ListGuesser, ShowGuesser } from 'react-admin';
+ import { dataProvider } from './dataProvider';

ReactDOM.createRoot(document.getElementById('root')!).render(
-     <React.StrictMode>
-         <App />
-     </React.StrictMode>
+     <Admin dataProvider={dataProvider}>
+         <Resource name="users" list={ListGuesser} show={ShowGuesser} />
+     </Admin>
);

Run the boilerplate again; you should now see the user listing page:

$ npm run dev

Click on each row to view its detail page.

Add NPM dependency

Next, we start to integrate our Loco and Seaography backend with the React-Admin frontend. We use axios to send POST requests to our GraphQL backend:

$ npm install axios

GraphQL Data Provider

Then, we can start implementing the GraphQL data provider by replacing the content of dataProvider.ts:

frontend/src/dataProvider.ts
- import jsonServerProvider from 'ra-data-json-server';
-
- export const dataProvider = jsonServerProvider('https://jsonplaceholder.typicode.com');

We integrate with our GraphQL endpoint at http://localhost:3000/api/graphql. Below we implement two handlers: one fetches data for the post listing, and the other fetches data for a single post:

frontend/src/dataProvider.ts
import { DataProvider } from "react-admin";
import axios from 'axios';

const apiUrl = 'http://localhost:3000/api/graphql';

export const dataProvider: DataProvider = {
    // Fetch data for post listing
    getList: (resource, params) => {
        // Paginator status
        const { page, perPage } = params.pagination;
        // Sorter status
        const { field, order } = params.sort;

        // POST request to GraphQL endpoint
        return axios.post(apiUrl, {
            query: `
                query {
                    notes (
                        orderBy: { ${field}: ${order} },
                        pagination: { page: { limit: ${perPage}, page: ${page - 1} }}
                    ) {
                        nodes {
                            id
                            title
                            createdAt
                            updatedAt
                        }
                        paginationInfo {
                            pages
                            current
                            offset
                            total
                        }
                    }
                }
            `
        })
        .then((response) => {
            // Unwrap the response
            const { nodes, paginationInfo } = response.data.data.notes;
            // Return the data array and total number of pages
            return {
                data: nodes,
                total: paginationInfo.total,
            };
        });
    },

    // Fetch data for a single post
    getOne: (resource, params) => {
        // POST request to GraphQL endpoint
        return axios.post(apiUrl, {
            query: `
                query {
                    notes(filters: {id: {eq: ${params.id}}}) {
                        nodes {
                            id
                            title
                            content
                            createdAt
                            updatedAt
                        }
                    }
                }
            `
        })
        .then((response) => {
            // Unwrap the response
            const { nodes } = response.data.data.notes;
            // Return the one and only data
            return {
                data: nodes[0],
            };
        });
    },

    getMany: (resource, params) => { },

    getManyReference: (resource, params) => { },

    update: (resource, params) => { },

    updateMany: (resource, params) => { },

    create: (resource, params) => { },

    delete: (resource, params) => { },

    deleteMany: (resource, params) => { },
};

Customize React-Admin Frontend

Replace the React-Admin template frontend with our own custom UI to list all notes from the database.

frontend/src/index.tsx
- ReactDOM.createRoot(document.getElementById('root')!).render(
- <Admin dataProvider={dataProvider}>
- <Resource name="users" list={ListGuesser} show={ShowGuesser} />
- </Admin>
- );

Implement the list and details page with specific columns:

frontend/src/index.tsx
import ReactDOM from 'react-dom/client';
import { Admin, Resource, List, Datagrid, TextField, Show, SimpleShowLayout } from 'react-admin';
import { dataProvider } from './dataProvider';

const PostList = () => (
<List>
<Datagrid bulkActionButtons={false}>
<TextField source="id" />
<TextField source="title" />
<TextField source="content" />
<TextField source="createdAt" />
<TextField source="updatedAt" />
</Datagrid>
</List>
);

const PostShow = () => (
<Show>
<SimpleShowLayout>
<TextField source="id" />
<TextField source="title" />
<TextField source="content" />
<TextField source="createdAt" />
<TextField source="updatedAt" />
</SimpleShowLayout>
</Show>
);

ReactDOM.createRoot(document.getElementById('root')!).render(
<Admin dataProvider={dataProvider}>
<Resource name="posts" list={PostList} show={PostShow} />
</Admin>
);

Auth Free GraphQL Endpointโ€‹

Disable user authentication on the GraphQL POST handler endpoint for convenience:

backend/src/controllers/graphql.rs
async fn graphql_handler(
- _auth: auth::JWT,
State(ctx): State<AppContext>,
req: Request<Body>,
) -> Result<Response> {
const DEPTH: usize = 1_000;
const COMPLEXITY: usize = 1_000;
// Construct the GraphQL query root
let schema = query_root::schema(ctx.db.clone(), DEPTH, COMPLEXITY).unwrap();
// GraphQL handler
let mut graphql_handler = async_graphql_axum::GraphQL::new(schema);
// Execute GraphQL request and fetch the results
let res = graphql_handler.call(req).await.unwrap();

Ok(res)
}

Put It into Action!โ€‹

Run the React-Admin frontend:

$ cd frontend
$ npm run dev

Run the Loco backend:

$ cd backend
$ cargo run start

Visit http://localhost:5173/ and you should see the post listing page:

We are fetching data from the GraphQL backend:

Click on a column header to sort by that column in ascending or descending order:

Click on each row to view its detail page:

Conclusionโ€‹

Adding GraphQL support to a Loco application is easy with the help of Seaography. It is an ergonomic library that integrates perfectly with any frontend framework. This tutorial only covers the basic integration of Loco and Seaography, i.e. querying data via the GraphQL endpoint. GraphQL mutations are not demonstrated; we leave them for you to code out!

SQL Server Supportโ€‹

SQL Server for SeaORM is now available as a closed beta. If you are interested, please sign up here.

Migrating from sea-orm to sea-orm-x is straightforward with two simple steps. First, update the existing sea-orm dependency to sea-orm-x and enable the sqlz-mssql feature. Note that you might need to patch the SeaORM dependency for upstream dependencies.

Cargo.toml
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x", features = ["runtime-async-std-rustls", "sqlz-mssql"] }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

# Patch SeaORM dependency for the upstream dependencies
[patch.crates-io]
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x" }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

Second, update the connection string to connect to the MSSQL database.

# If the schema is `dbo`, simply write:
mssql://username:password@host/database

# Or, specify the schema name by providing an extra `currentSchema` query param.
mssql://username:password@host/database?currentSchema=my_schema

# You can trust peer certificate by providing an extra trustCertificate query param.
mssql://username:password@host/database?trustCertificate=true
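The schema selection above can be pictured as plain query-string parsing: if `currentSchema` is absent, the default `dbo` applies. A standalone sketch of that rule (illustrative only, not SeaORM X's actual connection-string parser):

```rust
// Extract the `currentSchema` query parameter from a connection string,
// falling back to the default schema `dbo` when it is not provided.
fn current_schema(url: &str) -> &str {
    url.split_once('?')
        .and_then(|(_, query)| {
            query
                .split('&')
                .find_map(|kv| kv.strip_prefix("currentSchema="))
        })
        .unwrap_or("dbo")
}

fn main() {
    assert_eq!(current_schema("mssql://u:p@host/db"), "dbo");
    assert_eq!(
        current_schema("mssql://u:p@host/db?currentSchema=my_schema"),
        "my_schema"
    );
    println!("ok");
}
```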

SeaORM X has full Loco support and integrates seamlessly with many web frameworks:

  • Actix
  • Axum
  • Async GraphQL
  • jsonrpsee
  • Loco
  • Poem
  • Salvo
  • Tonic

Happy Coding!

Announcing SeaORM 1.0 ๐Ÿš

ยท 12 min read
SeaQL Team
Chris Tsang
SeaORM 1.0 Banner

🎉 We are pleased to release SeaORM 1.0 today! This is a special occasion for us, so this blog post will be a little more than release notes.

Our Journeyโ€‹

It's been nearly three years since our SeaORM 0.2 release. At that time, we set out to build a SQL ORM for the async Rust ecosystem, bringing together the best crates to allow developers to build high-performance and robust web services.

We would like to thank all early adopters, contributors and sponsors of SeaORM. Thank you to all our users for your trust and for being a part of our journey.

Today, many startups and companies are building applications in Rust, with SeaORM being an integral part of the stack. We are particularly pleased to see application frameworks, such as Loco, that provide tight integration with SeaORM, thereby offering a streamlined developer experience.

I think we've achieved our initial goal! We understand that maturity and stability are important considerations for teams when making technology choices. Therefore, we believe it's now time to stabilize SeaORM.

New Featuresโ€‹

Here are the highlights of some new features and enhancements introduced in SeaORM 1.0.

Refreshed migration schema definitionโ€‹

#2099 Thanks to the clever design made by Loco, we've refreshed the schema definition syntax.

An old migration script looks like this:

#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_table(
Table::create()
.table(Users::Table)
.if_not_exists()
.col(
ColumnDef::new(Users::Id)
.integer()
.not_null()
.auto_increment()
.primary_key(),
)
.col(ColumnDef::new(Users::Pid).uuid().not_null())
.col(ColumnDef::new(Users::Email).string().not_null().unique_key())
// ...
}
}

Now, using the new schema helpers, you can define the schema with a simplified syntax!

// Remember to import `sea_orm_migration::schema::*`
use sea_orm_migration::{prelude::*, schema::*};

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_table(
Table::create()
.table(Users::Table)
.if_not_exists()
.col(pk_auto(Users::Id)) // Primary key with auto-increment
.col(uuid(Users::Pid)) // UUID column
.col(string_uniq(Users::Email)) // String column with unique and not null constraint
.col(string(Users::Password)) // String column
.col(string(Users::ApiKey).unique_key())
.col(string(Users::Name))
.col(string_null(Users::ResetToken)) // Nullable string column
.col(timestamp_null(Users::ResetSentAt)) // Nullable timestamp column
.col(string_null(Users::EmailVerificationToken))
.col(timestamp_null(Users::EmailVerificationSentAt))
.col(timestamp_null(Users::EmailVerifiedAt))
.to_owned(),
)
.await
}

// ...
}

There are three variants for each commonly used column type:

  • <COLUMN_TYPE>() helper function, e.g. string(), defines a non-null string column
  • <COLUMN_TYPE>_null() helper function, e.g. string_null(), defines a nullable string column
  • <COLUMN_TYPE>_uniq() helper function, e.g. string_uniq(), defines a non-null and unique string column

The new schema helpers can be used by importing sea_orm_migration::schema::*. The migration library is fully backward compatible, so there is no rush to migrate old scripts. The new syntax is recommended for new scripts, and all examples in the SeaORM repository have been updated for demonstration. For advanced use cases, the old SeaQuery syntax can still be used.

Reworked SQLite Type Mappingsโ€‹

sea-orm#2077 sea-query#735 sea-schema#117 We've reworked the type mappings for SQLite across the SeaQL ecosystem, such that SeaQuery and SeaSchema are now reciprocal to each other. Migrations written with SeaQuery can be rediscovered by sea-orm-cli, which generates compatible entities! In other words, the roundtrip is complete.

Data types will be mapped to SQLite types with a custom naming scheme following SQLite's affinity rule:

  • INTEGER: integer, tiny_integer, small_integer, big_integer and boolean are stored as integer
  • REAL: float, double, decimal and money are stored as real
  • BLOB: blob and varbinary_blob are stored as blob
  • TEXT: all other data types are stored as text, including string, char, text, json, uuid, date, time, datetime, timestamp, etc.

The full type mapping table is documented here. For more information, please refer to our previous blog post.
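The affinity rule above boils down to a four-way classification by column type name. A standalone sketch (illustrative only, not SeaQuery's actual implementation):

```rust
// Map a column type name to the SQLite storage class it is stored as,
// following the affinity-based naming scheme described above.
fn sqlite_storage_class(col_type: &str) -> &'static str {
    match col_type {
        "integer" | "tiny_integer" | "small_integer" | "big_integer" | "boolean" => "integer",
        "float" | "double" | "decimal" | "money" => "real",
        "blob" | "varbinary_blob" => "blob",
        // string, char, text, json, uuid, date, time, datetime, timestamp, ...
        _ => "text",
    }
}

fn main() {
    assert_eq!(sqlite_storage_class("boolean"), "integer");
    assert_eq!(sqlite_storage_class("money"), "real");
    assert_eq!(sqlite_storage_class("uuid"), "text");
    println!("ok");
}
```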

Alternative casing support for EntityModel & ActiveEnumโ€‹

#2170 Added rename_all attribute to DeriveEntityModel & DeriveActiveEnum

#[derive(DeriveEntityModel)]
#[sea_orm(table_name = "user", rename_all = "camelCase")]
pub struct Model {
#[sea_orm(primary_key)]
id: i32,
first_name: String, // firstName
#[sea_orm(column_name = "lAsTnAmE")]
last_name: String, // lAsTnAmE
}

#[derive(EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "String", db_type = "String(StringLen::None)", rename_all = "camelCase")]
pub enum TestEnum {
DefaultVariant, // defaultVariant
#[sea_orm(rename = "kebab-case")]
VariantKebabCase, // variant-kebab-case
#[sea_orm(rename = "snake_case")]
VariantSnakeCase, // variant_snake_case
#[sea_orm(string_value = "CuStOmStRiNgVaLuE")]
CustomStringValue, // CuStOmStRiNgVaLuE
}
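The `rename_all = "camelCase"` attribute is just a case conversion over each identifier, as the inline comments above show (`first_name` becomes `firstName`). A minimal sketch of that snake_case-to-camelCase conversion (the real logic lives inside the derive macros):

```rust
// Convert a snake_case Rust identifier to its camelCase database name.
fn to_camel_case(snake: &str) -> String {
    let mut out = String::new();
    let mut upper_next = false;
    for ch in snake.chars() {
        if ch == '_' {
            upper_next = true; // skip the underscore, uppercase what follows
        } else if upper_next {
            out.extend(ch.to_uppercase());
            upper_next = false;
        } else {
            out.push(ch);
        }
    }
    out
}

fn main() {
    assert_eq!(to_camel_case("first_name"), "firstName");
    assert_eq!(to_camel_case("default_variant"), "defaultVariant");
    println!("ok");
}
```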

Other Enhancementsโ€‹

  • #2185 PrimaryKeyArity trait with ARITY Constant
fn get_arity_of<E: EntityTrait>() -> usize {
E::PrimaryKey::iter().count() // before; runtime
<<E::PrimaryKey as PrimaryKeyTrait>::ValueType as PrimaryKeyArity>::ARITY // now; compile-time
}
  • #2137 DerivePartialModel macro attribute entity now supports syn::Type
#[derive(DerivePartialModel)]
#[sea_orm(entity = "<entity::Model as ModelTrait>::Entity")]
struct EntityNameNotAIdent {
#[sea_orm(from_col = "foo2")]
_foo: i32,
#[sea_orm(from_col = "bar2")]
_bar: String,
}
  • #2146 Added RelationDef::from_alias()
#[derive(DeriveIden)]
pub struct Cf;

assert_eq!(
cake::Entity::find()
.join_as(
JoinType::LeftJoin,
cake_filling::Relation::Cake.def().rev(),
Cf
)
.join(
JoinType::LeftJoin,
cake_filling::Relation::Filling.def().from_alias(Cf)
)
.build(DbBackend::MySql)
.to_string(),
[
"SELECT `cake`.`id`, `cake`.`name` FROM `cake`",
"LEFT JOIN `cake_filling` AS `cf` ON `cake`.`id` = `cf`.`cake_id`",
"LEFT JOIN `filling` ON `cf`.`filling_id` = `filling`.`id`",
]
.join(" ")
);
  • #2256 Added non-TLS runtime
  • #2244 Added Insert::on_conflict_do_nothing
  • #2255 Migration schema nullable column set NULL explicitly
  • #2194 Added ActiveValue::set_if_not_equals()
  • #2197 Added ActiveValue::try_as_ref()
  • #2228 Added QuerySelect::order_by_with_nulls
  • #2233 Expose get_xxx_connection_pool by default
  • #2148 Added QueryResult::column_names
  • #2199 [sea-orm-macro] Add @generated in generated code
  • #1665 [sea-orm-macro] Qualify traits in DeriveActiveModel macro
  • #2064 [sea-orm-cli] Fix migrate generate on empty mod.rs files

Release Planningโ€‹

SQLx 0.8 was recently released! We want to upgrade from 0.7 as soon as possible. However, since sea-orm 1.0 has been in release candidate status for a while, and considering the breaking changes in sqlx 0.8, we decided to keep sea-orm 1.0 on sqlx 0.7.

We plan to release sea-orm 1.1 on sqlx 0.8 soon. To avoid unplanned build failures, we recommend all users to specify SeaORM dependency with tilde requirement:

sea-orm = { version = "~1.0" }

According to the Cargo Book, this will prevent automatic upgrade to 1.1, so you can perform the upgrade at a convenient time.
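Concretely, `~1.0` accepts any `1.0.x` patch release but rejects `1.1.0` and above. A sketch of that rule (illustrative; Cargo implements the real semver matching):

```rust
// Does a version satisfy the tilde requirement `~1.0`?
// Tilde pins major and minor: >= 1.0.0, < 1.1.0.
fn satisfies_tilde_1_0(major: u64, minor: u64, _patch: u64) -> bool {
    major == 1 && minor == 0 // any patch level of 1.0 is acceptable
}

fn main() {
    assert!(satisfies_tilde_1_0(1, 0, 12)); // 1.0.12: allowed
    assert!(!satisfies_tilde_1_0(1, 1, 0)); // 1.1.0: requires an explicit upgrade
    println!("ok");
}
```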

If you've been depending on sea-orm 0.12, we recommend upgrading to 1.0 today. Since both use sqlx 0.7, this upgrade focuses on SeaORM itself. You can then upgrade to 1.1 down the line, which will focus on SQLx, as we won't introduce other breaking changes. We recommend taking it one step at a time!

SeaORM 1.x will be maintained for at least 1 year. By then, we'll decide whether we want to release 2.0 and/or extend the lifecycle of 1.x.

Our Futureโ€‹

You may ask, does this mean SeaORM is "done"? No, not at all! SeaORM 1.0 provides us and the community a solid foundation to build more ambitious features around SeaORM. Here are some facets of our vision:

  1. GraphQL support via Seaography. We want to further develop Seaography to allow developers to turn a set of SeaORM entities into a fully-fledged GraphQL server!

  2. Admin dashboard. We want to build a first-class frontend framework for SeaORM. It will be a breeze to develop admin / client portals with SeaORM!

  3. Data science / analytics. SeaORM's been focusing on OLTP for now, and we're aware that people also have been using SeaORM for OLAP workloads. We want to provide better integration with dataframe libraries and develop more features tailored for data science and engineering.

  4. Scale-out features. We want to develop features to help with scaling applications, e.g. sharding, caching and multi-tenancy.

SQL Server Supportโ€‹

We've been planning SQL Server for SeaORM for a while, and SQL Server is finally coming to SeaORM 1.0! It will first be offered as a closed beta to our partners. If you are interested, please join our waiting list.

A small donation will be greatly appreciated, and goes a long way towards sustaining the organization.

A big shout out to our sponsors ๐Ÿ˜‡:

Gold Sponsorsโ€‹

osmos offers a data ingestion platform to streamline data ingestion, transformation and workflow management, and they're using SeaQL libraries under the hood!

Contact us if you also want to become a company sponsor and be featured here.

GitHub Sponsorsโ€‹

Afonso Barracha

Dean Sheather

Marcus Buffett

Kentaro Tanaka

Naoki Ikeguchi

Miles Granger

Data Intuitive

Marlon Mueller-Soppart

Anshul Sanghi

MasakiMiyazaki

Manfred Lee

KallyDev

ellik159

Caido

Coolpany SE

Rustacean Sticker Pack ๐Ÿฆ€โ€‹

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

Adding GraphQL Support to Loco with Seaography

ยท 9 min read
Billy Chan
SeaQL Team

In this tutorial, we will add a GraphQL endpoint with Seaography to our Loco starter application. Read the first tutorial of the series, Getting Started with Loco & SeaORM, if you haven't already.

The full source code can be found here.

What is Seaographyโ€‹

Seaography is a GraphQL framework for building GraphQL resolvers using SeaORM entities. It ships with a CLI tool that can generate ready-to-compile Rust GraphQL servers from existing MySQL, Postgres and SQLite databases.

Adding Dependencyโ€‹

Modify Cargo.toml and add a few more dependencies: seaography, async-graphql, async-graphql-axum and lazy_static.

loco_seaography/Cargo.toml
seaography = { version = "1.0.0-rc.4", features = ["with-decimal", "with-chrono"] }
async-graphql = { version = "7.0", features = ["decimal", "chrono", "dataloader", "dynamic-schema"] }
async-graphql-axum = { version = "7.0" }
lazy_static = { version = "1.4" }
tower-service = { version = "0.3" }

Setting up SeaORM Entities for Seaographyโ€‹

Seaography Entities are basically SeaORM Entities with some additions. They are fully compatible with SeaORM.

You can generate Seaography Entities by using sea-orm-cli with the extra --seaography flag.

sea-orm-cli generate entity -o src/models/_entities -u postgres://loco:loco@localhost:5432/loco_seaography_development --seaography
loco_seaography/src/models/_entities/notes.rs
use sea_orm::entity::prelude::*;
use serde::{Serialize, Deserialize};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "notes")]
pub struct Model {
pub created_at: DateTime,
pub updated_at: DateTime,
#[sea_orm(primary_key)]
pub id: i32,
pub title: Option<String>,
pub content: Option<String>,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
#[sea_orm(has_many = "super::files::Entity")]
Files,
}

impl Related<super::files::Entity> for Entity {
fn to() -> RelationDef {
Relation::Files.def()
}
}

+ // Defining `RelatedEntity` to relate one entity with another
+ #[derive(Copy, Clone, Debug, EnumIter, DeriveRelatedEntity)]
+ pub enum RelatedEntity {
+ #[sea_orm(entity = "super::files::Entity")]
+ Files,
+ }

We can see that a new enum, RelatedEntity, is generated in the entity files. This helps Seaography locate the related entities when making relational queries.

Implementing GraphQL Query Rootโ€‹

We have finished setting up the SeaORM entities for Seaography. Now, we implement the query root, where we bridge SeaORM and Async GraphQL with the help of Seaography.

loco_seaography/src/graphql/query_root.rs
use async_graphql::dynamic::*;
use sea_orm::DatabaseConnection;
use seaography::{Builder, BuilderContext};

use crate::models::_entities::*;

lazy_static::lazy_static! { static ref CONTEXT: BuilderContext = BuilderContext::default(); }

pub fn schema(
database: DatabaseConnection,
depth: usize,
complexity: usize,
) -> Result<Schema, SchemaError> {
// Builder of Seaography query root
let mut builder = Builder::new(&CONTEXT, database.clone());
// Register SeaORM entities
seaography::register_entities!(
builder,
// List all models we want to include in the GraphQL endpoint here
[files, notes, users]
);
// Configure async GraphQL limits
let schema = builder
.schema_builder()
// The depth is the number of nesting levels of the field
.limit_depth(depth)
// The complexity is the number of fields in the query
.limit_complexity(complexity);
// Finish up with including SeaORM database connection
schema.data(database).finish()
}

Writing Loco Controller to Handle GraphQL Endpointโ€‹

For convenience, we use the built-in GraphQL playground UI of async-graphql to test the GraphQL endpoint, and handle the GraphQL request with async-graphql-axum and Seaography.

loco_seaography/src/controllers/graphql.rs
use async_graphql::http::{playground_source, GraphQLPlaygroundConfig};
use axum::{body::Body, extract::Request};
use loco_rs::prelude::*;
use tower_service::Service;

use crate::graphql::query_root;

// GraphQL playground UI
async fn graphql_playground() -> Result<Response> {
// The `GraphQLPlaygroundConfig` takes one parameter
// which is the URL of the GraphQL handler: `/api/graphql`
let res = playground_source(GraphQLPlaygroundConfig::new("/api/graphql"));

Ok(Response::new(res.into()))
}

async fn graphql_handler(
State(ctx): State<AppContext>,
req: Request<Body>,
) -> Result<Response> {
const DEPTH: usize = 10;
const COMPLEXITY: usize = 100;
// Construct the GraphQL query root
let schema = query_root::schema(ctx.db.clone(), DEPTH, COMPLEXITY).unwrap();
// GraphQL handler
let mut graphql_handler = async_graphql_axum::GraphQL::new(schema);
// Execute GraphQL request and fetch the results
let res = graphql_handler.call(req).await.unwrap();

Ok(res)
}

pub fn routes() -> Routes {
// Define route
Routes::new()
// We put all GraphQL route behind `graphql` prefix
.prefix("graphql")
// GraphQL playground page is a GET request
.add("/", get(graphql_playground))
// GraphQL handler is a POST request
.add("/", post(graphql_handler))
}

Opening GraphQL Playgroundโ€‹

Compile and run the Loco application, then visit http://localhost:3000/api/graphql.

$ cargo run start

Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.60s
Running `target/debug/loco_seaography-cli start`
2024-06-24T08:04:52.173924Z INFO app: loco_rs::config: loading environment from selected_path="config/development.yaml" environment=development
2024-06-24T08:04:52.180447Z WARN app: loco_rs::boot: pretty backtraces are enabled (this is great for development but has a runtime cost for production. disable with `logger.pretty_backtrace` in your config yaml) environment=development
2024-06-24T08:04:52.272392Z INFO app: loco_rs::db: auto migrating environment=development
2024-06-24T08:04:52.275198Z INFO app: sea_orm_migration::migrator: Applying all pending migrations environment=development
2024-06-24T08:04:52.280720Z INFO app: sea_orm_migration::migrator: No pending migrations environment=development
2024-06-24T08:04:52.281280Z INFO app: loco_rs::boot: initializers loaded initializers="" environment=development
2024-06-24T08:04:52.308827Z INFO app: loco_rs::controller::app_routes: [GET] /api/_ping environment=development
2024-06-24T08:04:52.308936Z INFO app: loco_rs::controller::app_routes: [GET] /api/_health environment=development
2024-06-24T08:04:52.309021Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes environment=development
2024-06-24T08:04:52.309088Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes environment=development
2024-06-24T08:04:52.309158Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes/:id environment=development
2024-06-24T08:04:52.309234Z INFO app: loco_rs::controller::app_routes: [DELETE] /api/notes/:id environment=development
2024-06-24T08:04:52.309286Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes/:id environment=development
2024-06-24T08:04:52.309334Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/register environment=development
2024-06-24T08:04:52.309401Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/verify environment=development
2024-06-24T08:04:52.309471Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/login environment=development
2024-06-24T08:04:52.309572Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/forgot environment=development
2024-06-24T08:04:52.309662Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/reset environment=development
2024-06-24T08:04:52.309752Z INFO app: loco_rs::controller::app_routes: [GET] /api/user/current environment=development
2024-06-24T08:04:52.309827Z INFO app: loco_rs::controller::app_routes: [POST] /api/files/upload/:notes_id environment=development
2024-06-24T08:04:52.309910Z INFO app: loco_rs::controller::app_routes: [GET] /api/files/list/:notes_id environment=development
2024-06-24T08:04:52.309997Z INFO app: loco_rs::controller::app_routes: [GET] /api/files/view/:files_id environment=development
2024-06-24T08:04:52.310088Z INFO app: loco_rs::controller::app_routes: [GET] /api/graphql environment=development
2024-06-24T08:04:52.310172Z INFO app: loco_rs::controller::app_routes: [POST] /api/graphql environment=development
2024-06-24T08:04:52.310469Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding limit payload data="5mb" environment=development
2024-06-24T08:04:52.310615Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding log trace id environment=development
2024-06-24T08:04:52.310934Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding cors environment=development
2024-06-24T08:04:52.311008Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding etag layer environment=development

โ–„ โ–€
โ–€ โ–„
โ–„ โ–€ โ–„ โ–„ โ–„โ–€
โ–„ โ–€โ–„โ–„
โ–„ โ–€ โ–€ โ–€โ–„โ–€โ–ˆโ–„
โ–€โ–ˆโ–„
โ–„โ–„โ–„โ–„โ–„โ–„โ–„ โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„ โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„ โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„โ–„ โ–€โ–€โ–ˆ
โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–€โ–ˆ
โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–€โ–€โ–€ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–„โ–ˆโ–„
โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–„
โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–„โ–„โ–„ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ
โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–€
โ–€โ–€โ–€โ–ˆโ–ˆโ–„ โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€ โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€ โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€ โ–ˆโ–ˆโ–€
โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€โ–€
https://loco.rs

environment: development
database: automigrate
logger: debug
compilation: debug
modes: server

listening on [::]:3000

Creating Notesโ€‹

Create a new note with a GraphQL mutation.

mutation {
notesCreateOne(
data: {
id: 1
title: "Notes 001"
content: "Content 001"
createdAt: "2024-06-24 00:00:00"
updatedAt: "2024-06-24 00:00:00"
}
) {
id
title
content
createdAt
updatedAt
}
}

Querying Notesโ€‹

Query notes with its related files.

query {
notes {
nodes {
id
title
content
files {
nodes {
id
filePath
}
}
}
}
}

Adding User Authentication to GraphQL Endpointโ€‹

Our GraphQL handler can be accessed without user authentication. Next, we want to only allow logged-in users to access the GraphQL handler.

To do so, we add _auth: auth::JWT to the graphql_handler function.

loco_seaography/src/controllers/graphql.rs
async fn graphql_handler(
+ _auth: auth::JWT,
State(ctx): State<AppContext>,
req: Request<Body>,
) -> Result<Response> {
const DEPTH: usize = 10;
const COMPLEXITY: usize = 100;
// Construct the GraphQL query root
let schema = query_root::schema(ctx.db.clone(), DEPTH, COMPLEXITY).unwrap();
// GraphQL handler
let mut graphql_handler = async_graphql_axum::GraphQL::new(schema);
// Execute GraphQL request and fetch the results
let res = graphql_handler.call(req).await.unwrap();

Ok(res)
}

Then, run the Loco application and visit the GraphQL playground again. You should see an unauthorized error.

Adding Authentication header to GraphQL Playgroundโ€‹

First, we generate a valid authorization token by logging into the user account with the corresponding email and password:

$ curl --location 'http://localhost:3000/api/auth/login' \
--data-raw '{
"email": "cwchan.billy@gmail.com",
"password": "password"
}'

{
"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIwN2NjMDk5Ni03YWYxLTQ5YmYtYmY2NC01OTg4ZjFhODM2OTkiLCJleHAiOjE3MTk4MjIzMzN9.CgKp_aE-DyAuBJIvFGJ6l68ooAlEiJGhjWeaetDtHrupaYDm0ldVxf24vj3fPgkCqZ_njv2129n2pSCzHOjaow",
"pid": "07cc0996-7af1-49bf-bf64-5988f1a83699",
"name": "Billy",
"is_verified": true
}

Go to the settings page of the GraphQL playground and add a new header under request.globalHeaders:

Then, we can access GraphQL handler as usual.

Conclusionโ€‹

Adding GraphQL support to a Loco application is easy with the help of Seaography. It is an ergonomic library that turns SeaORM entities into GraphQL nodes and provides a set of utilities that, combined with a code generator, make GraphQL API building a breeze.

SQL Server Supportโ€‹

SQL Server for SeaORM is now available as a closed beta. If you are interested, please sign up here.

Migrating from sea-orm to sea-orm-x is straightforward with two simple steps. First, update the existing sea-orm dependency to sea-orm-x and enable the sqlz-mssql feature. Note that you might need to patch the SeaORM dependency for upstream dependencies.

Cargo.toml
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x", features = ["runtime-async-std-rustls", "sqlz-mssql"] }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

# Patch SeaORM dependency for the upstream dependencies
[patch.crates-io]
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x" }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

Second, update the connection string to connect to the MSSQL database.

# If the schema is `dbo`, simply write:
mssql://username:password@host/database

# Or, specify the schema name by providing an extra `currentSchema` query param.
mssql://username:password@host/database?currentSchema=my_schema

# You can trust peer certificate by providing an extra trustCertificate query param.
mssql://username:password@host/database?trustCertificate=true

SeaORM X has full Loco support and integrates seamlessly with many web frameworks:

  • Actix
  • Axum
  • Async GraphQL
  • jsonrpsee
  • Loco
  • Poem
  • Salvo
  • Tonic

Happy Coding!

Getting Started with Loco & SeaORM

ยท 13 min read
Billy Chan
SeaQL Team

In this tutorial, we will create a REST notepad backend from scratch and add a new REST endpoint to handle file uploads in Loco.

The full source code can be found here. The documentation of the REST API is available here.

What is Loco?โ€‹

Loco is a Rails-inspired web framework for Rust. It includes many Rails features with Rust ergonomics. Loco integrates seamlessly with SeaORM, offering a first-class development experience!

  • Controllers and routing via axum
  • Models, migration, and ActiveRecord via SeaORM
  • Views via serde
  • Seamless background jobs via sidekiq-rs; multi-modal: in process, out of process, async via Tokio
  • ...and more

REST API Starter Templateโ€‹

Install loco-cli:

cargo install loco-cli

The loco-cli provides three starter templates:

  • SaaS Starter
  • Rest API Starter
  • Lightweight Service Starter

For this tutorial, we want the "Rest API Starter" template:

$ loco new

โœ” You are inside a git repository. Do you wish to continue? ยท Yes
โœ” App name? ยท loco_starter
โœ” What would you like to build? ยท Rest API (with DB and user auth)

๐Ÿš‚ Loco app generated successfully in:
/sea-orm/examples/loco_starter

Next, we need to set up our PostgreSQL database.

docker run -d -p 5432:5432 -e POSTGRES_USER=loco -e POSTGRES_DB=loco_starter_development -e POSTGRES_PASSWORD="loco" postgres:15.3-alpine

If you want to use MySQL or SQLite as the database, please update the database.uri configuration in loco_starter/config/development.yaml and enable the corresponding database backend feature flag of SeaORM in loco_starter/Cargo.toml.

Now, start our REST application:

$ cargo loco start

Finished `dev` profile [unoptimized + debuginfo] target(s) in 1m 42s
Running `target/debug/loco_starter-cli start`
2024-05-20T06:56:42.724350Z INFO app: loco_rs::config: loading environment from selected_path="config/development.yaml" environment=development
2024-05-20T06:56:42.740338Z WARN app: loco_rs::boot: pretty backtraces are enabled (this is great for development but has a runtime cost for production. disable with `logger.pretty_backtrace` in your config yaml) environment=development
2024-05-20T06:56:42.833747Z INFO app: loco_rs::db: auto migrating environment=development
2024-05-20T06:56:42.845983Z INFO app: sea_orm_migration::migrator: Applying all pending migrations environment=development
2024-05-20T06:56:42.850231Z INFO app: sea_orm_migration::migrator: Applying migration 'm20220101_000001_users' environment=development
2024-05-20T06:56:42.864095Z INFO app: sea_orm_migration::migrator: Migration 'm20220101_000001_users' has been applied environment=development
2024-05-20T06:56:42.865799Z INFO app: sea_orm_migration::migrator: Applying migration 'm20231103_114510_notes' environment=development
2024-05-20T06:56:42.873653Z INFO app: sea_orm_migration::migrator: Migration 'm20231103_114510_notes' has been applied environment=development
2024-05-20T06:56:42.875645Z INFO app: loco_rs::boot: initializers loaded initializers="" environment=development
2024-05-20T06:56:42.906072Z INFO app: loco_rs::controller::app_routes: [GET] /api/_ping environment=development
2024-05-20T06:56:42.906176Z INFO app: loco_rs::controller::app_routes: [GET] /api/_health environment=development
2024-05-20T06:56:42.906264Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes environment=development
2024-05-20T06:56:42.906335Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes environment=development
2024-05-20T06:56:42.906414Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes/:id environment=development
2024-05-20T06:56:42.906501Z INFO app: loco_rs::controller::app_routes: [DELETE] /api/notes/:id environment=development
2024-05-20T06:56:42.906558Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes/:id environment=development
2024-05-20T06:56:42.906609Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/register environment=development
2024-05-20T06:56:42.906680Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/verify environment=development
2024-05-20T06:56:42.906753Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/login environment=development
2024-05-20T06:56:42.906838Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/forgot environment=development
2024-05-20T06:56:42.906931Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/reset environment=development
2024-05-20T06:56:42.907012Z INFO app: loco_rs::controller::app_routes: [GET] /api/user/current environment=development
2024-05-20T06:56:42.907309Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding limit payload data="5mb" environment=development
2024-05-20T06:56:42.907440Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding log trace id environment=development
2024-05-20T06:56:42.907714Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding cors environment=development
2024-05-20T06:56:42.907788Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding etag layer environment=development

▄ ▀
▀ ▄
▄ ▀ ▄ ▄ ▄▀
▄ ▀▄▄
▄ ▀ ▀ ▀▄▀█▄
▀█▄
▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄ ▀▀█
██████ █████ ███ █████ ███ █████ ███ ▀█
██████ █████ ███ █████ ▀▀▀ █████ ███ ▄█▄
██████ █████ ███ █████ █████ ███ ████▄
██████ █████ ███ █████ ▄▄▄ █████ ███ █████
██████ █████ ███ ████ ███ █████ ███ ████▀
▀▀▀██▄ ▀▀▀▀▀▀▀▀▀▀ ▀▀▀▀▀▀▀▀▀▀ ▀▀▀▀▀▀▀▀▀▀ ██▀
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
https://loco.rs

environment: development
database: automigrate
logger: debug
compilation: debug
modes: server

listening on [::]:3000

From the log messages printed above, we can see that:

  • Database migrations have been applied
  • All available REST API endpoints are mounted

To check that the application is listening for requests:

$ curl --location 'http://localhost:3000/api/_ping'

{"ok":true}

User Management

The starter template comes with a basic user management module.

Registration

It is common practice to send a verification email to the provided email address. However, that would require an SMTP server, which is not the focus of this blog post. So, I will skip the email verification:

loco_starter/src/controllers/auth.rs
#[debug_handler]
async fn register(
State(ctx): State<AppContext>,
Json(params): Json<RegisterParams>,
) -> Result<Response> {
let res = users::Model::create_with_password(&ctx.db, &params).await;

let user = match res {
Ok(user) => user,
Err(err) => {
tracing::info!(
message = err.to_string(),
user_email = &params.email,
"could not register user",
);
return format::json(());
}
};

+ // Skip email verification, all new registrations are considered verified
+ let _user = user
+ .into_active_model()
+ .verified(&ctx.db)
+ .await?;

+ // Skip sending verification email as we don't have a mail server
+ /*
let user = user
.into_active_model()
.set_email_verification_sent(&ctx.db)
.await?;

AuthMailer::send_welcome(&ctx, &user).await?;
+ */

format::json(())
}

Compile and run the application, then register a new user account:

$ curl --location 'http://localhost:3000/api/auth/register' \
--data-raw '{
"name": "Billy",
"email": "cwchan.billy@gmail.com",
"password": "password"
}'

null

Login

You should now see a new user row in the database.

Next, we login the user account with the corresponding email and password:

$ curl --location 'http://localhost:3000/api/auth/login' \
--data-raw '{
"email": "cwchan.billy@gmail.com",
"password": "password"
}'

{
"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw",
"pid": "11d01af3-6fe2-44f3-89f0-c02cec39741d",
"name": "Billy",
"is_verified": true
}

Authentication

The JWT token above will be used in user authentication. You must set the Authorization header to access any REST endpoint that requires user login.

For example, fetching the user info of the current user:

$ curl --location 'http://localhost:3000/api/user/current' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw'

{
"pid":"11d01af3-6fe2-44f3-89f0-c02cec39741d",
"name":"Billy",
"email":"cwchan.billy@gmail.com"
}

Handling REST Requests

The starter application comes with a notes controller for the notes table.

Create Notes

$ curl --location 'http://localhost:3000/api/notes' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw' \
--data '{
"title": "Getting Started with Loco & SeaORM",
"content": "In this tutorial, we would create an REST notepad backend starting from scratch and adding a new REST endpoint to handle file uploads."
}'

{
"created_at": "2024-05-20T08:43:45.408449",
"updated_at": "2024-05-20T08:43:45.408449",
"id": 1,
"title": "Getting Started with Loco & SeaORM",
"content": "In this tutorial, we would create an REST notepad backend starting from scratch and adding a new REST endpoint to handle file uploads."
}

List Notes

$ curl --location 'http://localhost:3000/api/notes' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw'

[
{
"created_at": "2024-05-20T08:43:45.408449",
"updated_at": "2024-05-20T08:43:45.408449",
"id": 1,
"title": "Getting Started with Loco & SeaORM",
"content": "In this tutorial, we would create an REST notepad backend starting from scratch and adding a new REST endpoint to handle file uploads."
},
{
"created_at": "2024-05-20T08:45:38.973130",
"updated_at": "2024-05-20T08:45:38.973130",
"id": 2,
"title": "Introducing SeaORM X",
"content": "SeaORM X is built on top of SeaORM with support for SQL Server"
}
]

Get Notes

$ curl --location 'http://localhost:3000/api/notes/2' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw'

{
"created_at": "2024-05-20T08:45:38.973130",
"updated_at": "2024-05-20T08:45:38.973130",
"id": 2,
"title": "Introducing SeaORM X",
"content": "SeaORM X is built on top of SeaORM with support for SQL Server"
}
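The remaining notes routes from the startup log work the same way; for example, deleting a note (here `<JWT>` stands for the token obtained at login, and note id 1 is just an example):

```shell
$ curl --location --request DELETE 'http://localhost:3000/api/notes/1' \
--header 'Authorization: Bearer <JWT>'
```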

Handling File Uploads

Next, we will add a file upload feature where users can upload files related to a note.

File Table Migration

Create a migration file for the new files table. Each row of files references a specific row of notes in the database.

loco_starter/migration/src/m20240520_173001_files.rs
use sea_orm_migration::{prelude::*, schema::*};

use super::m20231103_114510_notes::Notes;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.create_table(
table_auto(Files::Table)
.col(pk_auto(Files::Id))
.col(integer(Files::NotesId))
.col(string(Files::FilePath))
.foreign_key(
ForeignKey::create()
.name("FK_files_notes_id")
.from(Files::Table, Files::NotesId)
.to(Notes::Table, Notes::Id),
)
.to_owned(),
)
.await
}

async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.drop_table(Table::drop().table(Files::Table).to_owned())
.await
}
}

#[derive(DeriveIden)]
pub enum Files {
Table,
Id,
NotesId,
FilePath,
}

Then, we need to enable the new migration.

loco_starter/migration/src/lib.rs
#![allow(elided_lifetimes_in_paths)]
#![allow(clippy::wildcard_imports)]
pub use sea_orm_migration::prelude::*;

mod m20220101_000001_users;
mod m20231103_114510_notes;
+ mod m20240520_173001_files;

pub struct Migrator;

#[async_trait::async_trait]
impl MigratorTrait for Migrator {
fn migrations() -> Vec<Box<dyn MigrationTrait>> {
vec![
Box::new(m20220101_000001_users::Migration),
Box::new(m20231103_114510_notes::Migration),
+ Box::new(m20240520_173001_files::Migration),
]
}
}

Compile and start the application; it should run our new migration on startup.

$ cargo loco start

...
2024-05-20T09:39:59.607525Z INFO app: loco_rs::db: auto migrating environment=development
2024-05-20T09:39:59.611997Z INFO app: sea_orm_migration::migrator: Applying all pending migrations environment=development
2024-05-20T09:39:59.621699Z INFO app: sea_orm_migration::migrator: Applying migration 'm20240520_173001_files' environment=development
2024-05-20T09:39:59.643886Z INFO app: sea_orm_migration::migrator: Migration 'm20240520_173001_files' has been applied environment=development
...

File Model Definition

Define the files entity model.

loco_starter/src/models/_entities/files.rs
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "files")]
pub struct Model {
pub created_at: DateTime,
pub updated_at: DateTime,
#[sea_orm(primary_key)]
pub id: i32,
pub notes_id: i32,
pub file_path: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
#[sea_orm(
belongs_to = "super::notes::Entity",
from = "Column::NotesId",
to = "super::notes::Column::Id"
)]
Notes,
}

impl Related<super::notes::Entity> for Entity {
fn to() -> RelationDef {
Relation::Notes.def()
}
}

Implement the ActiveModelBehavior in the parent module.

loco_starter/src/models/files.rs
use sea_orm::entity::prelude::*;

use super::_entities::files::ActiveModel;

impl ActiveModelBehavior for ActiveModel {
// extend activemodel below (keep comment for generators)
}

File Controller

The controller is where we handle file uploading, listing, and viewing.

Upload File

The following upload handler allows multiple files to be uploaded in a single POST request.

loco_starter/src/controllers/files.rs
#[debug_handler]
pub async fn upload(
_auth: auth::JWT,
Path(notes_id): Path<i32>,
State(ctx): State<AppContext>,
mut multipart: Multipart,
) -> Result<Response> {
// Collect all uploaded files
let mut files = Vec::new();

// Iterate all files in the POST body
while let Some(field) = multipart.next_field().await.map_err(|err| {
tracing::error!(error = ?err, "could not read multipart");
Error::BadRequest("could not read multipart".into())
})? {
// Get the file name
let file_name = match field.file_name() {
Some(file_name) => file_name.to_string(),
_ => return Err(Error::BadRequest("file name not found".into())),
};

// Get the file content as bytes
let content = field.bytes().await.map_err(|err| {
tracing::error!(error = ?err, "could not read bytes");
Error::BadRequest("could not read bytes".into())
})?;

// Create a folder to store the uploaded file
let now = chrono::offset::Local::now()
.format("%Y%m%d_%H%M%S")
.to_string();
let uuid = uuid::Uuid::new_v4().to_string();
let folder = format!("{now}_{uuid}");
let upload_folder = PathBuf::from(UPLOAD_DIR).join(&folder);
fs::create_dir_all(&upload_folder).await?;

// Write the file into the newly created folder
let path = upload_folder.join(file_name);
let mut f = fs::OpenOptions::new()
.create_new(true)
.write(true)
.open(&path)
.await?;
f.write_all(&content).await?;
f.flush().await?;

// Record the file upload in database
let file = files::ActiveModel {
notes_id: ActiveValue::Set(notes_id),
file_path: ActiveValue::Set(
path.strip_prefix(UPLOAD_DIR)
.unwrap()
.to_str()
.unwrap()
.to_string(),
),
..Default::default()
}
.insert(&ctx.db)
.await?;

files.push(file);
}

format::json(files)
}

Try uploading multiple files in a single POST request. All uploaded files are saved into the uploads directory.
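For example, uploading two files to the note with id 1 might look like the following (the file names are hypothetical, and `<JWT>` stands for the token obtained at login):

```shell
$ curl --location 'http://localhost:3000/api/files/upload/1' \
--header 'Authorization: Bearer <JWT>' \
--form 'file1=@"hello.txt"' \
--form 'file2=@"world.txt"'
```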

List File

List all files that are related to a specific notes_id.

loco_starter/src/controllers/files.rs
#[debug_handler]
pub async fn list(
_auth: auth::JWT,
Path(notes_id): Path<i32>,
State(ctx): State<AppContext>,
) -> Result<Response> {
// Fetch all files uploaded for a specific notes
let files = files::Entity::find()
.filter(files::Column::NotesId.eq(notes_id))
.order_by_asc(files::Column::Id)
.all(&ctx.db)
.await?;

format::json(files)
}

View File

View a specific file.

loco_starter/src/controllers/files.rs
#[debug_handler]
pub async fn view(
_auth: auth::JWT,
Path(files_id): Path<i32>,
State(ctx): State<AppContext>,
) -> Result<Response> {
// Fetch the file info from database
let file = files::Entity::find_by_id(files_id)
.one(&ctx.db)
.await?
.expect("File not found");

// Stream the file
let file = fs::File::open(format!("{UPLOAD_DIR}/{}", file.file_path)).await?;
let stream = ReaderStream::new(file);
let body = Body::from_stream(stream);

Ok(format::render().response().body(body)?)
}

File Controller Routes

Add our newly defined files handler to the application routes.

loco_starter/src/controllers/files.rs
pub fn routes() -> Routes {
// Bind the routes
Routes::new()
.prefix("files")
.add("/upload/:notes_id", post(upload))
.add("/list/:notes_id", get(list))
.add("/view/:files_id", get(view))
}
loco_starter/src/app.rs
pub struct App;

#[async_trait]
impl Hooks for App {
// ...

fn routes(_ctx: &AppContext) -> AppRoutes {
AppRoutes::with_default_routes()
.prefix("/api")
.add_route(controllers::notes::routes())
.add_route(controllers::auth::routes())
.add_route(controllers::user::routes())
+ .add_route(controllers::files::routes())
}

// ...
}

Extra Rust Dependencies

Remember to enable the multipart feature in axum and add the tokio-util dependency.

loco_starter/Cargo.toml
- axum = "0.7.1"
+ axum = { version = "0.7.1", features = ["multipart"] }

+ tokio-util = "0.7.11"

SQL Server Support

SQL Server for SeaORM is now available as a closed beta. If you are interested, please sign up here.

Migrating from sea-orm to sea-orm-x is straightforward and takes two simple steps. First, update the existing sea-orm dependency to sea-orm-x and enable the sqlz-mssql feature. Note that you might need to patch the SeaORM dependency for upstream dependencies.

Cargo.toml
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x", features = ["runtime-async-std-rustls", "sqlz-mssql"] }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

# Patch SeaORM dependency for the upstream dependencies
[patch.crates-io]
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x" }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

Second, update the connection string to connect to the MSSQL database.

# If the schema is `dbo`, simply write:
mssql://username:password@host/database

# Or, specify the schema name by providing an extra `currentSchema` query param.
mssql://username:password@host/database?currentSchema=my_schema

# You can trust peer certificate by providing an extra trustCertificate query param.
mssql://username:password@host/database?trustCertificate=true

SeaORM X has full Loco support and integrates seamlessly with many web frameworks:

  • Actix
  • Axum
  • Async GraphQL
  • jsonrpsee
  • Loco
  • Poem
  • Salvo
  • Tonic

Happy Coding!

The rainbow bridge between sync and async Rust

· 9 min read
Chris Tsang
SeaQL Team

This story stems from the saying "What Color is Your Function?", a criticism of the async implementations in common programming languages. Well, Rust also falls into the category of "colored functions". So in this blog post, let's see how we can design systems to effectively combine sync and async code.

Rainbow bridge is a reference to the bridge in Thor that teleports you between different realms - a perfect analogy!

Background

Sync code can be blocking IO, or expensive computation. Async code is usually network IO where you'd wait for results.

In both cases, we want to maximize concurrency, such that the program can make full use of the CPU instead of sitting there idle. A common approach is message passing, where we package tasks and send them to different workers for execution.

Sync -> Sync

Let's start with the classic example, pure sync code. There exists std::sync::mpsc in the standard library, so let's take a look.

use std::sync::mpsc::channel;

// create an unbounded channel
let (sender, receiver) = channel();

// never blocks
sender.send("Hello".to_string()).unwrap();

let handle = std::thread::spawn(move|| {
// wait until there is a message
let message = receiver.recv().unwrap();
println!("{message}");
});

handle.join().unwrap();
println!("Bye");

Prints (Playground):

Hello
Bye

Now, we'll make a more elaborate example: a program that spawns a number of worker threads to perform some 'expensive' computation. The main thread would dispatch the tasks to those threads and in turn collect the results via another channel.

┌─────────────┐    tasks    ┌─────────────────┐   result
│             ╞═════════════╡ worker thread 1 ╞═══════════╗  ┌─────────────┐
│ main thread │             ├─────────────────┤           ╠══╡ main thread │
│             ╞═════════════╡ worker thread 2 ╞═══════════╝  └─────────────┘
└─────────────┘             └─────────────────┘

First, setup the channels.

let (result, collector) = channel(); // result
let mut senders = Vec::new();
for _ in 0..THREADS {
let (sender, receiver) = channel(); // tasks
senders.push(sender);
let result = result.clone();
std::thread::spawn(move || worker(receiver, result));
}

The worker thread looks like:

fn worker(receiver: Receiver<Task>, sender: Sender<Done>) {
while let Ok(task) = receiver.recv() {
let result = process(task);
sender.send(result).unwrap();
}
}

Then, dispatch tasks.

for c in 0..TASKS {
let task = some_random_task();
senders[c % THREADS].send(task).unwrap();
}

Finally, we can collect results.

for _ in 0..TASKS {
let result = collector.recv().unwrap();
println!("{result:?}");
}

Full source code can be found here.

Async -> Async

Next, we'll migrate to async land. Using tokio::sync::mpsc, it's very similar to the above example, except every operation is async and thus imposes additional restrictions on lifetimes. (The trick is: just move / clone. Don't borrow.)

tokio's unbounded_channel is the equivalent of std's channel. Otherwise it's very similar. The spawn method takes in a Future; since the worker needs to take ownership of the channels, we construct an async block with async move {}.

std                 | tokio
(unbounded) channel | unbounded_channel
sync_channel        | (bounded) channel
let (result, mut collector) = unbounded_channel();
let mut senders = Vec::new();
for _ in 0..WORKERS {
let (sender, mut receiver) = unbounded_channel();
senders.push(sender);
let result = result.clone();
tokio::task::spawn(async move {
while let Some(task) = receiver.recv().await {
result.send(process(task).await).unwrap();
}
});
}
std::mem::drop(result); // <-- ?

Why do we need to drop the result sender? This is one of the footguns: tokio swallows panics originating within tasks, so if one panicked, the program would never exit. By dropping the last copy of result in scope, the channel closes automatically after all tasks exit, which in turn trickles up to our collector.

The rest is almost the same.

for (i, task) in tasks.iter().enumerate() {
senders[i % WORKERS].send(task.clone()).unwrap();
}
std::mem::drop(senders);

for _ in 0..tasks.len() {
let result = collector.recv().await.unwrap();
println!("{result:?}");
}

Full source code can be found here.

Flume mpmc

mpmc - multi producer, multi consumer

The previous examples have a flaw: we have to spawn multiple mpsc channels to send tasks, which is:

  1. Clumsy: we need to keep a list of senders
  2. Not the most efficient: is round-robin the best way of distributing tasks? Some of the workers may remain idle

Here is the ideal setup:

                       tasks     ┌─────────────────┐   result
┌─────────────┐   ╔══════════════╡ worker thread 1 ╞═══════════╗  ┌─────────────┐
│ main thread ╞═══╣              ├─────────────────┤           ╠══╡ main thread │
└─────────────┘   ╚══════════════╡ worker thread 2 ╞═══════════╝  └─────────────┘
                                 └─────────────────┘

Let's rewrite our example using Flume. But first, know the mapping between tokio and flume:

Tokio             | Flume
unbounded_channel | unbounded (channel)
(bounded) channel | bounded (channel)
send              | send
recv              | recv_async

In tokio, the method is exclusive: async fn recv(&mut self); in flume, the method is fn recv_async(&self) -> RecvFut. The type signatures already tell you the distinction between mpsc and mpmc! It is wrong to use the blocking recv method in an async context in flume, but sadly the compiler would not warn you about it.

The channel setup is now slightly simpler:

let (sender, receiver) = unbounded(); // task
let (result, collector) = unbounded(); // result

for _ in 0..WORKERS {
let receiver = receiver.clone();
let result = result.clone();
tokio::task::spawn(async move {
while let Ok(task) = receiver.recv_async().await {
result.send(process(task).await).unwrap();
}
});
}

We no longer have to dispatch tasks ourselves. All workers share the same task queue, so each worker fetches the next task as soon as the previous one is finished - effectively load balancing among themselves!

for task in &tasks {
sender.send(task.clone()).unwrap();
}

for _ in 0..tasks.len() {
let result = collector.recv_async().await.unwrap();
println!("{result:?}");
}

Full source code can be found here.

Sync -> Async

In the final example, let's consider a program that is mostly sync, but has a few async operations that we want to handle in a background thread.

In the example below, our blocking operation is 'reading from stdin' from the main thread. And we send those lines to an async thread to handle.

┌─────────────┐           ┌──────────────┐
│ main thread ╞═══════════╡ async thread │
└─────────────┘           └──────────────┘

It follows the usual 3 steps:

  1. create a flume channel
  2. pass the receiver end to a worker thread
  3. send tasks over the channel

fn main() -> Result<()> {
let (sender, receiver) = unbounded(); // flume channel

std::thread::spawn(move || {
// this runtime is single-threaded
let rt = tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap();
rt.block_on(handler(receiver))
});

loop {
let mut line = String::new();
// this blocks the current thread until there is a new line
match std::io::stdin().read_line(&mut line) {
Ok(0) => break, // this means stdin is closed
Ok(_) => (),
Err(e) => panic!("{e:?}"),
}
sender.send(line)?;
}

Ok(())
}

This is the handler:

async fn handler(receiver: Receiver<String>) -> Result<()> {
while let Ok(line) = receiver.recv_async().await {
process(line).await?;
}
Ok(())
}

It doesn't look much different from the async -> async example, the only difference is one side is sync! Full source code can be found here.

Graceful shutdown

The above code has a problem: we never know whether a line has been processed. If the program has an exit mechanism such as handling sigint, there is a possibility of exiting before all the lines have been processed.

Let's see how we can shutdown properly.

let handle = std::thread::spawn(..);

// running is an AtomicBool
while running.load(Ordering::Acquire) {
let line = read_line_from_stdin();
sender.send(line)?;
}

std::mem::drop(sender);
handle.join().unwrap().unwrap();

The shutdown sequence has 3 steps:

  1. we first obtain the JoinHandle to the thread
  2. we drop all copies of sender, effectively closing the channel
  3. in the worker thread, receiver.recv_async() would result in an error, as stated in the docs

    Asynchronously receive a value from the channel, returning an error if all senders have been dropped.

  4. the worker thread finishes, joining the main thread

Async -> Sync

The other way around is equally simple, as illustrated in SeaStreamer's example.

Conclusion

                     | sync               | async
to spawn worker      | std::thread::spawn | tokio::task::spawn
concurrency          | multi-threaded     | can be multi-threaded or single-threaded
worker is            | FnOnce             | Future
send message with    | send               | send
receive message with | recv               | recv_async
waiting for messages | blocking           | yield to runtime

In this article we discussed:

  1. Multi-threaded parallelism in sync realm
  2. Concurrency in async realm - with tokio and flume
  3. Bridging sync and async code with flume

Now you have learnt the powers of flume, but there is more!

In the next episode, hopefully we will get to discuss other interesting features of flume - bounded channels and 'rendezvous channels'.

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL