
· 8 min read
Billy Chan

In this tutorial, we will add a GraphQL endpoint with Seaography on top of our Loco starter application. Read the first tutorial of the series, Getting Started with Loco & SeaORM, if you haven't already.

The full source code can be found here.

What is Seaography

Seaography is a GraphQL framework for building GraphQL resolvers using SeaORM entities. It ships with a CLI tool that can generate ready-to-compile Rust GraphQL servers from existing MySQL, Postgres and SQLite databases.

Adding Dependency

Modify Cargo.toml and add a few more dependencies: seaography, async-graphql, async-graphql-axum, lazy_static and tower-service.

loco_seaography/Cargo.toml
seaography = { version = "1.0.0-rc.4", features = ["with-decimal", "with-chrono"] }
async-graphql = { version = "7.0", features = ["decimal", "chrono", "dataloader", "dynamic-schema"] }
async-graphql-axum = { version = "7.0" }
lazy_static = { version = "1.4" }
tower-service = { version = "0.3" }

Setting up SeaORM Entities for Seaography

Seaography Entities are basically SeaORM Entities with some additions. They are fully compatible with SeaORM.

You can generate Seaography Entities by using sea-orm-cli with the extra --seaography flag.

sea-orm-cli generate entity -o src/models/_entities -u postgres://loco:loco@localhost:5432/loco_seaography_development --seaography
loco_seaography/src/models/_entities/notes.rs
use sea_orm::entity::prelude::*;
use serde::{Serialize, Deserialize};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "notes")]
pub struct Model {
    pub created_at: DateTime,
    pub updated_at: DateTime,
    #[sea_orm(primary_key)]
    pub id: i32,
    pub title: Option<String>,
    pub content: Option<String>,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(has_many = "super::files::Entity")]
    Files,
}

impl Related<super::files::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::Files.def()
    }
}

+ // Defining `RelatedEntity` to relate one entity with another
+ #[derive(Copy, Clone, Debug, EnumIter, DeriveRelatedEntity)]
+ pub enum RelatedEntity {
+     #[sea_orm(entity = "super::files::Entity")]
+     Files,
+ }

We can see that a new enum, RelatedEntity, is generated in the entity files. It helps Seaography locate the related entities when making relational queries.

Implementing GraphQL Query Root

We have finished setting up the SeaORM entities for Seaography. Now, we implement the query root of Seaography, where we bridge SeaORM and Async GraphQL with the help of Seaography.

loco_seaography/src/graphql/query_root.rs
use async_graphql::dynamic::*;
use sea_orm::DatabaseConnection;
use seaography::{Builder, BuilderContext};

use crate::models::_entities::*;

lazy_static::lazy_static! { static ref CONTEXT: BuilderContext = BuilderContext::default(); }

pub fn schema(
    database: DatabaseConnection,
    depth: usize,
    complexity: usize,
) -> Result<Schema, SchemaError> {
    // Builder of the Seaography query root
    let mut builder = Builder::new(&CONTEXT, database.clone());
    // Register SeaORM entities
    seaography::register_entities!(
        builder,
        // List all models we want to include in the GraphQL endpoint here
        [files, notes, users]
    );
    // Configure async GraphQL limits
    let schema = builder
        .schema_builder()
        // The depth is the number of nesting levels of the field
        .limit_depth(depth)
        // The complexity is the number of fields in the query
        .limit_complexity(complexity);
    // Finish up by including the SeaORM database connection
    schema.data(database).finish()
}

Writing Loco Controller to Handle GraphQL Endpoint

For convenience, we use the built-in GraphQL playground UI of async-graphql to test the GraphQL endpoint, and handle GraphQL requests with async-graphql-axum and Seaography.

loco_seaography/src/controllers/graphql.rs
use async_graphql::http::{playground_source, GraphQLPlaygroundConfig};
use axum::{body::Body, extract::Request};
use loco_rs::prelude::*;
use tower_service::Service;

use crate::graphql::query_root;

// GraphQL playground UI
async fn graphql_playground() -> Result<Response> {
    // `GraphQLPlaygroundConfig` takes one parameter:
    // the URL of the GraphQL handler, `/api/graphql`
    let res = playground_source(GraphQLPlaygroundConfig::new("/api/graphql"));

    Ok(Response::new(res.into()))
}

async fn graphql_handler(
    State(ctx): State<AppContext>,
    req: Request<Body>,
) -> Result<Response> {
    const DEPTH: usize = 10;
    const COMPLEXITY: usize = 100;
    // Construct the GraphQL query root
    let schema = query_root::schema(ctx.db.clone(), DEPTH, COMPLEXITY).unwrap();
    // GraphQL handler
    let mut graphql_handler = async_graphql_axum::GraphQL::new(schema);
    // Execute the GraphQL request and fetch the results
    let res = graphql_handler.call(req).await.unwrap();

    Ok(res)
}

pub fn routes() -> Routes {
    // Define routes
    Routes::new()
        // We put all GraphQL routes behind the `graphql` prefix
        .prefix("graphql")
        // The GraphQL playground page is a GET request
        .add("/", get(graphql_playground))
        // The GraphQL handler is a POST request
        .add("/", post(graphql_handler))
}

Opening GraphQL Playground

Compile and run the Loco application, then visit http://localhost:3000/api/graphql.

$ cargo run start

Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.60s
Running `target/debug/loco_seaography-cli start`
2024-06-24T08:04:52.173924Z INFO app: loco_rs::config: loading environment from selected_path="config/development.yaml" environment=development
2024-06-24T08:04:52.180447Z WARN app: loco_rs::boot: pretty backtraces are enabled (this is great for development but has a runtime cost for production. disable with `logger.pretty_backtrace` in your config yaml) environment=development
2024-06-24T08:04:52.272392Z INFO app: loco_rs::db: auto migrating environment=development
2024-06-24T08:04:52.275198Z INFO app: sea_orm_migration::migrator: Applying all pending migrations environment=development
2024-06-24T08:04:52.280720Z INFO app: sea_orm_migration::migrator: No pending migrations environment=development
2024-06-24T08:04:52.281280Z INFO app: loco_rs::boot: initializers loaded initializers="" environment=development
2024-06-24T08:04:52.308827Z INFO app: loco_rs::controller::app_routes: [GET] /api/_ping environment=development
2024-06-24T08:04:52.308936Z INFO app: loco_rs::controller::app_routes: [GET] /api/_health environment=development
2024-06-24T08:04:52.309021Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes environment=development
2024-06-24T08:04:52.309088Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes environment=development
2024-06-24T08:04:52.309158Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes/:id environment=development
2024-06-24T08:04:52.309234Z INFO app: loco_rs::controller::app_routes: [DELETE] /api/notes/:id environment=development
2024-06-24T08:04:52.309286Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes/:id environment=development
2024-06-24T08:04:52.309334Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/register environment=development
2024-06-24T08:04:52.309401Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/verify environment=development
2024-06-24T08:04:52.309471Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/login environment=development
2024-06-24T08:04:52.309572Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/forgot environment=development
2024-06-24T08:04:52.309662Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/reset environment=development
2024-06-24T08:04:52.309752Z INFO app: loco_rs::controller::app_routes: [GET] /api/user/current environment=development
2024-06-24T08:04:52.309827Z INFO app: loco_rs::controller::app_routes: [POST] /api/files/upload/:notes_id environment=development
2024-06-24T08:04:52.309910Z INFO app: loco_rs::controller::app_routes: [GET] /api/files/list/:notes_id environment=development
2024-06-24T08:04:52.309997Z INFO app: loco_rs::controller::app_routes: [GET] /api/files/view/:files_id environment=development
2024-06-24T08:04:52.310088Z INFO app: loco_rs::controller::app_routes: [GET] /api/graphql environment=development
2024-06-24T08:04:52.310172Z INFO app: loco_rs::controller::app_routes: [POST] /api/graphql environment=development
2024-06-24T08:04:52.310469Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding limit payload data="5mb" environment=development
2024-06-24T08:04:52.310615Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding log trace id environment=development
2024-06-24T08:04:52.310934Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding cors environment=development
2024-06-24T08:04:52.311008Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding etag layer environment=development

(Loco ASCII art banner)
https://loco.rs

environment: development
database: automigrate
logger: debug
compilation: debug
modes: server

listening on [::]:3000

Creating Notes

Create a new note with a GraphQL mutation.

mutation {
  notesCreateOne(
    data: {
      id: 1
      title: "Notes 001"
      content: "Content 001"
      createdAt: "2024-06-24 00:00:00"
      updatedAt: "2024-06-24 00:00:00"
    }
  ) {
    id
    title
    content
    createdAt
    updatedAt
  }
}
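
If the mutation succeeds, the response should look something like this (a standard GraphQL response envelope echoing the fields we requested):

{
  "data": {
    "notesCreateOne": {
      "id": 1,
      "title": "Notes 001",
      "content": "Content 001",
      "createdAt": "2024-06-24 00:00:00",
      "updatedAt": "2024-06-24 00:00:00"
    }
  }
}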

Querying Notes

Query notes together with their related files.

query {
  notes {
    nodes {
      id
      title
      content
      files {
        nodes {
          id
          filePath
        }
      }
    }
  }
}

Adding User Authentication to GraphQL Endpoint

Our GraphQL handler can currently be accessed without user authentication. Next, we want to allow only logged-in users to access the GraphQL handler.

To do so, we add _auth: auth::JWT to the graphql_handler function.

loco_seaography/src/controllers/graphql.rs
async fn graphql_handler(
+     _auth: auth::JWT,
    State(ctx): State<AppContext>,
    req: Request<Body>,
) -> Result<Response> {
    const DEPTH: usize = 10;
    const COMPLEXITY: usize = 100;
    // Construct the GraphQL query root
    let schema = query_root::schema(ctx.db.clone(), DEPTH, COMPLEXITY).unwrap();
    // GraphQL handler
    let mut graphql_handler = async_graphql_axum::GraphQL::new(schema);
    // Execute the GraphQL request and fetch the results
    let res = graphql_handler.call(req).await.unwrap();

    Ok(res)
}

Then, run the Loco application and visit the GraphQL playground again. You should see an unauthorized error.

Adding Authentication header to GraphQL Playground

First, we generate a valid authorization token by logging into the user account with the corresponding email and password:

$ curl --location 'http://localhost:3000/api/auth/login' \
--data-raw '{
    "email": "cwchan.billy@gmail.com",
    "password": "password"
}'

{
    "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIwN2NjMDk5Ni03YWYxLTQ5YmYtYmY2NC01OTg4ZjFhODM2OTkiLCJleHAiOjE3MTk4MjIzMzN9.CgKp_aE-DyAuBJIvFGJ6l68ooAlEiJGhjWeaetDtHrupaYDm0ldVxf24vj3fPgkCqZ_njv2129n2pSCzHOjaow",
    "pid": "07cc0996-7af1-49bf-bf64-5988f1a83699",
    "name": "Billy",
    "is_verified": true
}

Go to the settings page of the GraphQL playground and add a new header under request.globalHeaders:
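
The header entry should look something like this (with <token> standing in for the JWT obtained above):

{
    "Authorization": "Bearer <token>"
}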

Then, we can access the GraphQL handler as usual.

Conclusion

Adding GraphQL support to a Loco application is easy with the help of Seaography. It is an ergonomic library that turns SeaORM entities into GraphQL nodes; its utilities, combined with a code generator, make building GraphQL APIs a breeze.

SQL Server Support

SQL Server for SeaORM is now available as a closed beta. If you are interested, please sign up here.

Migrating from sea-orm to sea-orm-x is straightforward with two simple steps. First, update the existing sea-orm dependency to sea-orm-x and enable the sqlz-mssql feature. Note that you might need to patch the SeaORM dependency for upstream dependencies.

Cargo.toml
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x", features = ["runtime-async-std-rustls", "sqlz-mssql"] }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

# Patch SeaORM dependency for the upstream dependencies
[patch.crates-io]
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x" }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

Second, update the connection string to connect to the MSSQL database.

# If the schema is `dbo`, simply write:
mssql://username:password@host/database

# Or, specify the schema name by providing an extra `currentSchema` query param.
mssql://username:password@host/database?currentSchema=my_schema

# You can trust peer certificate by providing an extra trustCertificate query param.
mssql://username:password@host/database?trustCertificate=true
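
On the application side, connecting is the same as with any other backend; a minimal sketch, assuming one of the connection strings above:

use sea_orm::{Database, DatabaseConnection, DbErr};

// a minimal sketch; swap in your actual MSSQL connection string
async fn connect() -> Result<DatabaseConnection, DbErr> {
    Database::connect("mssql://username:password@host/database").await
}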

SeaORM X has full Loco support and integrates seamlessly with many web frameworks:

  • Actix
  • Axum
  • Async GraphQL
  • jsonrpsee
  • Loco
  • Poem
  • Salvo
  • Tonic

Happy Coding!

· 11 min read
Billy Chan

In this tutorial, we will create a REST notepad backend starting from scratch and add a new REST endpoint to handle file uploads in Loco.

The full source code can be found here. The documentation of the REST API is available here.

What is Loco?

Loco is a Rails-inspired web framework for Rust. It includes many Rails features with Rust ergonomics. Loco integrates seamlessly with SeaORM, offering a first-class development experience!

  • Controllers and routing via axum
  • Models, migration, and ActiveRecord via SeaORM
  • Views via serde
  • Seamless background jobs via sidekiq-rs; multi-modal: in process, out of process, or async via Tokio
  • ...and more

REST API Starter Template

Install loco-cli:

cargo install loco-cli

The loco-cli provides three starter templates:

  • SaaS Starter
  • Rest API Starter
  • Lightweight Service Starter

For this tutorial, we want the "Rest API Starter" template:

$ loco new

✔ You are inside a git repository. Do you wish to continue? · Yes
✔ App name? · loco_starter
✔ What would you like to build? · Rest API (with DB and user auth)

🚂 Loco app generated successfully in:
/sea-orm/examples/loco_starter

Next, we need to set up our PostgreSQL database.

docker run -d -p 5432:5432 -e POSTGRES_USER=loco -e POSTGRES_DB=loco_starter_development -e POSTGRES_PASSWORD="loco" postgres:15.3-alpine

If you want to use MySQL or SQLite as the database, please update the database.uri configuration in loco_starter/config/development.yaml and enable the corresponding database backend feature flag of SeaORM in loco_starter/Cargo.toml.
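
For illustration, the database section of config/development.yaml looks roughly like this (a sketch; the generated file may differ):

database:
  # Database connection URI; swap in a mysql:// or sqlite:// URI here if desired
  uri: postgres://loco:loco@localhost:5432/loco_starter_development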

Now, start our REST application:

$ cargo loco start

Finished `dev` profile [unoptimized + debuginfo] target(s) in 1m 42s
Running `target/debug/loco_starter-cli start`
2024-05-20T06:56:42.724350Z INFO app: loco_rs::config: loading environment from selected_path="config/development.yaml" environment=development
2024-05-20T06:56:42.740338Z WARN app: loco_rs::boot: pretty backtraces are enabled (this is great for development but has a runtime cost for production. disable with `logger.pretty_backtrace` in your config yaml) environment=development
2024-05-20T06:56:42.833747Z INFO app: loco_rs::db: auto migrating environment=development
2024-05-20T06:56:42.845983Z INFO app: sea_orm_migration::migrator: Applying all pending migrations environment=development
2024-05-20T06:56:42.850231Z INFO app: sea_orm_migration::migrator: Applying migration 'm20220101_000001_users' environment=development
2024-05-20T06:56:42.864095Z INFO app: sea_orm_migration::migrator: Migration 'm20220101_000001_users' has been applied environment=development
2024-05-20T06:56:42.865799Z INFO app: sea_orm_migration::migrator: Applying migration 'm20231103_114510_notes' environment=development
2024-05-20T06:56:42.873653Z INFO app: sea_orm_migration::migrator: Migration 'm20231103_114510_notes' has been applied environment=development
2024-05-20T06:56:42.875645Z INFO app: loco_rs::boot: initializers loaded initializers="" environment=development
2024-05-20T06:56:42.906072Z INFO app: loco_rs::controller::app_routes: [GET] /api/_ping environment=development
2024-05-20T06:56:42.906176Z INFO app: loco_rs::controller::app_routes: [GET] /api/_health environment=development
2024-05-20T06:56:42.906264Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes environment=development
2024-05-20T06:56:42.906335Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes environment=development
2024-05-20T06:56:42.906414Z INFO app: loco_rs::controller::app_routes: [GET] /api/notes/:id environment=development
2024-05-20T06:56:42.906501Z INFO app: loco_rs::controller::app_routes: [DELETE] /api/notes/:id environment=development
2024-05-20T06:56:42.906558Z INFO app: loco_rs::controller::app_routes: [POST] /api/notes/:id environment=development
2024-05-20T06:56:42.906609Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/register environment=development
2024-05-20T06:56:42.906680Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/verify environment=development
2024-05-20T06:56:42.906753Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/login environment=development
2024-05-20T06:56:42.906838Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/forgot environment=development
2024-05-20T06:56:42.906931Z INFO app: loco_rs::controller::app_routes: [POST] /api/auth/reset environment=development
2024-05-20T06:56:42.907012Z INFO app: loco_rs::controller::app_routes: [GET] /api/user/current environment=development
2024-05-20T06:56:42.907309Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding limit payload data="5mb" environment=development
2024-05-20T06:56:42.907440Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding log trace id environment=development
2024-05-20T06:56:42.907714Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding cors environment=development
2024-05-20T06:56:42.907788Z INFO app: loco_rs::controller::app_routes: [Middleware] Adding etag layer environment=development

(Loco ASCII art banner)
https://loco.rs

environment: development
database: automigrate
logger: debug
compilation: debug
modes: server

listening on [::]:3000

From the log messages printed above, we saw:

  • Database migrations have been applied
  • All available REST API endpoints

To check if the application is listening for requests:

$ curl --location 'http://localhost:3000/api/_ping'

{"ok":true}

User Management

The starter template comes with a basic user management module.

Registration

It is a common practice to send a verification email to the provided email address. However, that would require an SMTP server, which is not the focus of this blog post. So, I will skip the email verification:

loco_starter/src/controllers/auth.rs
#[debug_handler]
async fn register(
    State(ctx): State<AppContext>,
    Json(params): Json<RegisterParams>,
) -> Result<Response> {
    let res = users::Model::create_with_password(&ctx.db, &params).await;

    let user = match res {
        Ok(user) => user,
        Err(err) => {
            tracing::info!(
                message = err.to_string(),
                user_email = &params.email,
                "could not register user",
            );
            return format::json(());
        }
    };

+     // Skip email verification, all new registrations are considered verified
+     let _user = user
+         .into_active_model()
+         .verified(&ctx.db)
+         .await?;

+     // Skip sending verification email as we don't have a mail server
+     /*
    let user = user
        .into_active_model()
        .set_email_verification_sent(&ctx.db)
        .await?;

    AuthMailer::send_welcome(&ctx, &user).await?;
+     */

    format::json(())
}

Compile and run the application, then register a new user account:

$ curl --location 'http://localhost:3000/api/auth/register' \
--data-raw '{
    "name": "Billy",
    "email": "cwchan.billy@gmail.com",
    "password": "password"
}'

null

Login

You should now see a new user row in the database.

Next, we login the user account with the corresponding email and password:

$ curl --location 'http://localhost:3000/api/auth/login' \
--data-raw '{
    "email": "cwchan.billy@gmail.com",
    "password": "password"
}'

{
    "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw",
    "pid": "11d01af3-6fe2-44f3-89f0-c02cec39741d",
    "name": "Billy",
    "is_verified": true
}

Authentication

The JWT token above will be used in user authentication. You must set the Authorization header to access any REST endpoint that requires user login.

For example, fetching the user info of the current user:

$ curl --location 'http://localhost:3000/api/user/current' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw'

{
    "pid": "11d01af3-6fe2-44f3-89f0-c02cec39741d",
    "name": "Billy",
    "email": "cwchan.billy@gmail.com"
}

Handling REST Requests

The starter application comes with a notes controller for the notes table.

Create Notes

$ curl --location 'http://localhost:3000/api/notes' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw' \
--data '{
    "title": "Getting Started with Loco & SeaORM",
    "content": "In this tutorial, we would create an REST notepad backend starting from scratch and adding a new REST endpoint to handle file uploads."
}'

{
    "created_at": "2024-05-20T08:43:45.408449",
    "updated_at": "2024-05-20T08:43:45.408449",
    "id": 1,
    "title": "Getting Started with Loco & SeaORM",
    "content": "In this tutorial, we would create an REST notepad backend starting from scratch and adding a new REST endpoint to handle file uploads."
}

List Notes

$ curl --location 'http://localhost:3000/api/notes' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw'

[
    {
        "created_at": "2024-05-20T08:43:45.408449",
        "updated_at": "2024-05-20T08:43:45.408449",
        "id": 1,
        "title": "Getting Started with Loco & SeaORM",
        "content": "In this tutorial, we would create an REST notepad backend starting from scratch and adding a new REST endpoint to handle file uploads."
    },
    {
        "created_at": "2024-05-20T08:45:38.973130",
        "updated_at": "2024-05-20T08:45:38.973130",
        "id": 2,
        "title": "Introducing SeaORM X",
        "content": "SeaORM X is built on top of SeaORM with support for SQL Server"
    }
]

Get Notes

$ curl --location 'http://localhost:3000/api/notes/2' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJwaWQiOiIxMWQwMWFmMy02ZmUyLTQ0ZjMtODlmMC1jMDJjZWMzOTc0MWQiLCJleHAiOjE3MTY3OTU3NjR9.i1OElxy33rkorkxk6QpTG1Kg4_Q8O0jqBJ2i82nltkcQYZsLmSSnrxtdtlfdvV0ccJ3hQA3JoY9L13cjz2uSCw'

{
    "created_at": "2024-05-20T08:45:38.973130",
    "updated_at": "2024-05-20T08:45:38.973130",
    "id": 2,
    "title": "Introducing SeaORM X",
    "content": "SeaORM X is built on top of SeaORM with support for SQL Server"
}

Handling File Uploads

Next, we will add a file upload feature where users can upload files related to a note.

File Table Migration

Create a migration file for the new files table. Each row in files references a specific note in the database.

loco_starter/migration/src/m20240520_173001_files.rs
use sea_orm_migration::{prelude::*, schema::*};

use super::m20231103_114510_notes::Notes;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .create_table(
                table_auto(Files::Table)
                    .col(pk_auto(Files::Id))
                    .col(integer(Files::NotesId))
                    .col(string(Files::FilePath))
                    .foreign_key(
                        ForeignKey::create()
                            .name("FK_files_notes_id")
                            .from(Files::Table, Files::NotesId)
                            .to(Notes::Table, Notes::Id),
                    )
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .drop_table(Table::drop().table(Files::Table).to_owned())
            .await
    }
}

#[derive(DeriveIden)]
pub enum Files {
    Table,
    Id,
    NotesId,
    FilePath,
}

Then, we need to enable the new migration.

loco_starter/migration/src/lib.rs
#![allow(elided_lifetimes_in_paths)]
#![allow(clippy::wildcard_imports)]
pub use sea_orm_migration::prelude::*;

mod m20220101_000001_users;
mod m20231103_114510_notes;
+ mod m20240520_173001_files;

pub struct Migrator;

#[async_trait::async_trait]
impl MigratorTrait for Migrator {
    fn migrations() -> Vec<Box<dyn MigrationTrait>> {
        vec![
            Box::new(m20220101_000001_users::Migration),
            Box::new(m20231103_114510_notes::Migration),
+             Box::new(m20240520_173001_files::Migration),
        ]
    }
}

Compile and start the application; it should run our new migration on startup.

$ cargo loco start

...
2024-05-20T09:39:59.607525Z INFO app: loco_rs::db: auto migrating environment=development
2024-05-20T09:39:59.611997Z INFO app: sea_orm_migration::migrator: Applying all pending migrations environment=development
2024-05-20T09:39:59.621699Z INFO app: sea_orm_migration::migrator: Applying migration 'm20240520_173001_files' environment=development
2024-05-20T09:39:59.643886Z INFO app: sea_orm_migration::migrator: Migration 'm20240520_173001_files' has been applied environment=development
...

File Model Definition

Define the files entity model.

loco_starter/src/models/_entities/files.rs
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "files")]
pub struct Model {
    pub created_at: DateTime,
    pub updated_at: DateTime,
    #[sea_orm(primary_key)]
    pub id: i32,
    pub notes_id: i32,
    pub file_path: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "super::notes::Entity",
        from = "Column::NotesId",
        to = "super::notes::Column::Id"
    )]
    Notes,
}

impl Related<super::notes::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::Notes.def()
    }
}

Implement the ActiveModelBehavior in the parent module.

loco_starter/src/models/files.rs
use sea_orm::entity::prelude::*;

use super::_entities::files::ActiveModel;

impl ActiveModelBehavior for ActiveModel {
    // extend activemodel below (keep comment for generators)
}

File Controller

The controller is where we handle file uploading, listing, and viewing.

Upload File

The following upload handler allows multiple files to be uploaded in a single POST request.

loco_starter/src/controllers/files.rs
#[debug_handler]
pub async fn upload(
    _auth: auth::JWT,
    Path(notes_id): Path<i32>,
    State(ctx): State<AppContext>,
    mut multipart: Multipart,
) -> Result<Response> {
    // Collect all uploaded files
    let mut files = Vec::new();

    // Iterate all files in the POST body
    while let Some(field) = multipart.next_field().await.map_err(|err| {
        tracing::error!(error = ?err, "could not read multipart");
        Error::BadRequest("could not read multipart".into())
    })? {
        // Get the file name
        let file_name = match field.file_name() {
            Some(file_name) => file_name.to_string(),
            _ => return Err(Error::BadRequest("file name not found".into())),
        };

        // Get the file content as bytes
        let content = field.bytes().await.map_err(|err| {
            tracing::error!(error = ?err, "could not read bytes");
            Error::BadRequest("could not read bytes".into())
        })?;

        // Create a folder to store the uploaded file
        let now = chrono::offset::Local::now()
            .format("%Y%m%d_%H%M%S")
            .to_string();
        let uuid = uuid::Uuid::new_v4().to_string();
        let folder = format!("{now}_{uuid}");
        let upload_folder = PathBuf::from(UPLOAD_DIR).join(&folder);
        fs::create_dir_all(&upload_folder).await?;

        // Write the file into the newly created folder
        let path = upload_folder.join(file_name);
        let mut f = fs::OpenOptions::new()
            .create_new(true)
            .write(true)
            .open(&path)
            .await?;
        f.write_all(&content).await?;
        f.flush().await?;

        // Record the file upload in database
        let file = files::ActiveModel {
            notes_id: ActiveValue::Set(notes_id),
            file_path: ActiveValue::Set(
                path.strip_prefix(UPLOAD_DIR)
                    .unwrap()
                    .to_str()
                    .unwrap()
                    .to_string(),
            ),
            ..Default::default()
        }
        .insert(&ctx.db)
        .await?;

        files.push(file);
    }

    format::json(files)
}

Try uploading multiple files in a single POST request:
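
For example, a hypothetical invocation uploading two files to note 1 (the file paths and <token> are placeholders):

$ curl --location 'http://localhost:3000/api/files/upload/1' \
--header 'Authorization: Bearer <token>' \
--form 'file1=@"/path/to/photo.jpg"' \
--form 'file2=@"/path/to/notes.txt"'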

All uploaded files are saved into the uploads directory:

List File

List all files that are related to a specific notes_id.

loco_starter/src/controllers/files.rs
#[debug_handler]
pub async fn list(
    _auth: auth::JWT,
    Path(notes_id): Path<i32>,
    State(ctx): State<AppContext>,
) -> Result<Response> {
    // Fetch all files uploaded for a specific notes
    let files = files::Entity::find()
        .filter(files::Column::NotesId.eq(notes_id))
        .order_by_asc(files::Column::Id)
        .all(&ctx.db)
        .await?;

    format::json(files)
}
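
For example, a hypothetical invocation listing the files of note 1 (<token> is a placeholder):

$ curl --location 'http://localhost:3000/api/files/list/1' \
--header 'Authorization: Bearer <token>'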

View File

View a specific file.

loco_starter/src/controllers/files.rs
#[debug_handler]
pub async fn view(
    _auth: auth::JWT,
    Path(files_id): Path<i32>,
    State(ctx): State<AppContext>,
) -> Result<Response> {
    // Fetch the file info from database
    let file = files::Entity::find_by_id(files_id)
        .one(&ctx.db)
        .await?
        .expect("File not found");

    // Stream the file
    let file = fs::File::open(format!("{UPLOAD_DIR}/{}", file.file_path)).await?;
    let stream = ReaderStream::new(file);
    let body = Body::from_stream(stream);

    Ok(format::render().response().body(body)?)
}
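
For example, a hypothetical invocation downloading file 1 to disk (<token> is a placeholder):

$ curl --location 'http://localhost:3000/api/files/view/1' \
--header 'Authorization: Bearer <token>' \
--output downloaded_file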

File Controller Routes

Add our newly defined files handler to the application routes.

loco_starter/src/controllers/files.rs
pub fn routes() -> Routes {
    // Bind the routes
    Routes::new()
        .prefix("files")
        .add("/upload/:notes_id", post(upload))
        .add("/list/:notes_id", get(list))
        .add("/view/:files_id", get(view))
}
loco_starter/src/app.rs
pub struct App;

#[async_trait]
impl Hooks for App {
    // ...

    fn routes(_ctx: &AppContext) -> AppRoutes {
        AppRoutes::with_default_routes()
            .prefix("/api")
            .add_route(controllers::notes::routes())
            .add_route(controllers::auth::routes())
            .add_route(controllers::user::routes())
+             .add_route(controllers::files::routes())
    }

    // ...
}

Extra Rust Dependencies

Remember to enable the multipart feature of axum and add the tokio-util dependency.

loco_starter/Cargo.toml
- axum = "0.7.1"
+ axum = { version = "0.7.1", features = ["multipart"] }

+ tokio-util = "0.7.11"

SQL Server Support

SQL Server for SeaORM is now available as a closed beta. If you are interested, please sign up here.

Migrating from sea-orm to sea-orm-x is straightforward with two simple steps. First, update the existing sea-orm dependency to sea-orm-x and enable the sqlz-mssql feature. Note that you might need to patch the SeaORM dependency for upstream dependencies.

Cargo.toml
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x", features = ["runtime-async-std-rustls", "sqlz-mssql"] }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

# Patch SeaORM dependency for the upstream dependencies
[patch.crates-io]
sea-orm = { path = "<SEA_ORM_X_ROOT>/sea-orm-x" }
sea-orm-migration = { path = "<SEA_ORM_X_ROOT>/sea-orm-x/sea-orm-migration" }

Second, update the connection string to connect to the MSSQL database.

# If the schema is `dbo`, simply write:
mssql://username:password@host/database

# Or, specify the schema name by providing an extra `currentSchema` query param.
mssql://username:password@host/database?currentSchema=my_schema

# You can trust peer certificate by providing an extra trustCertificate query param.
mssql://username:password@host/database?trustCertificate=true

SeaORM X has full Loco support and integrates seamlessly with many web frameworks:

  • Actix
  • Axum
  • Async GraphQL
  • jsonrpsee
  • Loco
  • Poem
  • Salvo
  • Tonic

Happy Coding!

· 8 min read
Chris Tsang

This story stems from the saying "What Color is Your Function?", a criticism of the async implementations of common programming languages. Well, Rust also falls into the category of "colored functions". So in this blog post, let's see how we can design systems that effectively combine sync and async code.

Rainbow bridge is a reference to the bridge in Thor that teleports you between different realms - a perfect analogy!

Background

Sync code can be blocking IO, or expensive computation. Async code is usually network IO where you'd wait for results.

In both cases, we want to maximize concurrency, such that the program can make full use of the CPU instead of sitting there idle. A common approach is message passing, where we package tasks and send them to different workers for execution.

Sync -> Sync

Let's start with the classic example, pure sync code. There exists std::sync::mpsc in the standard library, so let's take a look.

use std::sync::mpsc::channel;

// create an unbounded channel
let (sender, receiver) = channel();

// never blocks
sender.send("Hello".to_string()).unwrap();

let handle = std::thread::spawn(move || {
    // wait until there is a message
    let message = receiver.recv().unwrap();
    println!("{message}");
});

handle.join().unwrap();
println!("Bye");

Prints (Playground):

Hello
Bye

Now, we'll make a more elaborate example: a program that spawns a number of worker threads to perform some 'expensive' computation. The main thread would dispatch the tasks to those threads and in turn collect the results via another channel.

┌─────────────┐   tasks   ┌─────────────────┐   result
│             ╞═══════════╡ worker thread 1 ╞═══════════╗    ┌─────────────┐
│ main thread │           ├─────────────────┤           ╠════╡ main thread │
│             ╞═══════════╡ worker thread 2 ╞═══════════╝    └─────────────┘
└─────────────┘           └─────────────────┘

First, set up the channels.

let (result, collector) = channel(); // result
let mut senders = Vec::new();
for _ in 0..THREADS {
    let (sender, receiver) = channel(); // tasks
    senders.push(sender);
    let result = result.clone();
    std::thread::spawn(move || worker(receiver, result));
}

The worker thread looks like:

fn worker(receiver: Receiver<Task>, sender: Sender<Done>) {
    while let Ok(task) = receiver.recv() {
        let result = process(task);
        sender.send(result).unwrap();
    }
}

Then, dispatch tasks.

for c in 0..TASKS {
    let task = some_random_task();
    senders[c % THREADS].send(task).unwrap();
}

Finally, we can collect results.

for _ in 0..TASKS {
    let result = collector.recv().unwrap();
    println!("{result:?}");
}

Full source code can be found here.

Async -> Async

Next, we'll migrate to async land. Using tokio::sync::mpsc, it's very similar to the above example, except every operation is async and thus imposes additional restrictions on lifetimes. (The trick is: just move / clone. Don't borrow.)

tokio's unbounded_channel is the equivalent of std's channel. Otherwise it's very similar. The spawn method takes in a Future; since the worker needs to take in the channels, we construct an async closure with async move {}.

std                   tokio
(unbounded) channel   unbounded_channel
sync_channel          (bounded) channel

let (result, mut collector) = unbounded_channel();
let mut senders = Vec::new();
for _ in 0..WORKERS {
    let (sender, mut receiver) = unbounded_channel();
    senders.push(sender);
    let result = result.clone();
    tokio::task::spawn(async move {
        while let Some(task) = receiver.recv().await {
            result.send(process(task).await).unwrap();
        }
    });
}
std::mem::drop(result); // <-- ?

Why do we need to drop the result sender? This is one of the footguns: tokio would swallow panics originating within the task, and so if that happened, the program would never exit. By dropping the last copy of result in scope, the channel would automatically close after all tasks exit, which in turn would trickle up to our collector.

The rest is almost the same.

for (i, task) in tasks.iter().enumerate() {
    senders[i % WORKERS].send(task.clone()).unwrap();
}
std::mem::drop(senders);

for _ in 0..tasks.len() {
    let result = collector.recv().await.unwrap();
    println!("{result:?}");
}

Full source code can be found here.

Flume mpmc

mpmc - multi producer, multi consumer

The previous examples have a flaw: we have to spawn multiple mpsc channels to send tasks, which is:

  1. Clumsy: we need to keep a list of senders
  2. Not the most efficient: is round-robin the best way of distributing tasks? Some of the workers may remain idle

Here is the ideal setup:

                      tasks    ┌─────────────────┐    result
┌─────────────┐   ╔═══════════╡ worker thread 1 ╞═══════════╗   ┌─────────────┐
│ main thread ╞═══╣           ├─────────────────┤           ╠═══╡ main thread │
└─────────────┘   ╚═══════════╡ worker thread 2 ╞═══════════╝   └─────────────┘
                              └─────────────────┘

Let's rewrite our example using Flume. But first, know the mapping between tokio and flume:

Tokio               Flume
unbounded_channel   unbounded (channel)
(bounded) channel   bounded (channel)
send                send
recv                recv_async

In tokio, the method is exclusive: async fn recv(&mut self); in flume, the method is fn recv_async(&self) -> RecvFut. The type signature already tells you the distinction between mpsc and mpmc! It is wrong to use the blocking recv method in an async context in flume, but sadly the compiler will not warn you about it.

The channel setup is now slightly simpler:

let (sender, receiver) = unbounded(); // task
let (result, collector) = unbounded(); // result

for _ in 0..WORKERS {
    let receiver = receiver.clone();
    let result = result.clone();
    tokio::task::spawn(async move {
        while let Ok(task) = receiver.recv_async().await {
            result.send(process(task).await).unwrap();
        }
    });
}

We no longer have to dispatch tasks ourselves. All workers share the same task queue, and thus workers fetch the next task as soon as the previous one is finished - effectively load-balancing among themselves!

for task in &tasks {
    sender.send(task.clone()).unwrap();
}

for _ in 0..tasks.len() {
    let result = collector.recv_async().await.unwrap();
    println!("{result:?}");
}

Full source code can be found here.

Sync -> Async

In the final example, let's consider a program that is mostly sync, but has a few async operations that we want to handle in a background thread.

In the example below, our blocking operation is 'reading from stdin' on the main thread, and we send those lines to an async thread to handle.

┌─────────────┐           ┌──────────────┐
│ main thread ╞═══════════╡ async thread │
└─────────────┘           └──────────────┘

It follows the usual 3 steps:

  1. create a flume channel
  2. pass the receiver end to a worker thread
  3. send tasks over the channel

fn main() -> Result<()> {
    let (sender, receiver) = unbounded(); // flume channel

    std::thread::spawn(move || {
        // this runtime is single-threaded
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap();
        rt.block_on(handler(receiver))
    });

    loop {
        let mut line = String::new();
        // this blocks the current thread until there is a new line
        match std::io::stdin().read_line(&mut line) {
            Ok(0) => break, // this means stdin is closed
            Ok(_) => (),
            Err(e) => panic!("{e:?}"),
        }
        sender.send(line)?;
    }

    Ok(())
}

This is the handler:

async fn handler(receiver: Receiver<String>) -> Result<()> {
    while let Ok(line) = receiver.recv_async().await {
        process(line).await?;
    }
    Ok(())
}

It doesn't look much different from the async -> async example; the only difference is that one side is sync! Full source code can be found here.

Graceful shutdown

The above code has a problem: we never know whether a line has been processed. If the program has an exit mechanism from handling sigint, there is a possibility of exiting before all the lines have been processed.

Let's see how we can shut down properly.

let handle = std::thread::spawn(..);

// running is an AtomicBool
while running.load(Ordering::Acquire) {
    let line = read_line_from_stdin();
    sender.send(line)?;
}

std::mem::drop(sender);
handle.join().unwrap().unwrap();

The shutdown sequence has 4 steps:

  1. we first obtain the JoinHandle to the thread
  2. we drop all copies of sender, effectively closing the channel
  3. in the worker thread, receiver.recv_async() would result in an error, as stated in the docs

    Asynchronously receive a value from the channel, returning an error if all senders have been dropped.

  4. the worker thread finishes, joining the main thread

Async -> Sync

The other way around is equally simple, as illustrated in SeaStreamer's example.
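
For completeness, here is a minimal sketch of that direction (our own illustration, not SeaStreamer's code): an async task sends messages over a flume channel, and a plain thread consumes them with the blocking recv.

let (sender, receiver) = flume::unbounded::<String>();

// the sync worker thread blocks on `recv` until the channel is closed
let handle = std::thread::spawn(move || {
    while let Ok(item) = receiver.recv() {
        println!("processing {item}");
    }
});

// from async context, `send` on an unbounded flume channel never blocks
sender.send("hello".to_string()).unwrap();
drop(sender); // close the channel so the worker thread exits
handle.join().unwrap();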

Conclusion

                       sync                 async
to spawn worker        std::thread::spawn   tokio::task::spawn
concurrency            multi-threaded       can be multi-threaded or single-threaded
worker is              FnOnce               Future
send message with      send                 send
receive message with   recv                 recv_async
waiting for messages   blocking             yield to runtime

In this article we discussed:

  1. Multi-threaded parallelism in sync realm
  2. Concurrency in async realm - with tokio and flume
  3. Bridging sync and async code with flume

Now you have learnt the powers of flume, but there is more!

In the next episode, hopefully we will get to discuss other interesting features of flume - bounded channels and 'rendezvous channels'. Follow our X / Twitter for updates!

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

· 6 min read
Chris Tsang

This tutorial shows you how to use Rust to build a system that:

  1. Subscribe to a real-time websocket data feed
  2. Stream the data to Kafka / Redis
  3. Save the data into a SQL database

Here, we'll employ a micro-services architecture, and split the functionality into two apps:

┌─────────────────────┐      ┌───────────────┐      ┌───────────────┐
│ Websocket Data Feed │ ---> │ Redis / Kafka │ ---> │ SQL Data Sink │
└─────────────────────┘      └───────────────┘      └───────────────┘

In stream processing, we often use the terms "source" / "sink", but a data sink is simply a stream consumer that persists the data into a store.

On the source side, we'd use SeaStreamer. On the sink side, we'd be using SeaORM. Below are the supported technologies; for the rest of this article, we'll be using Redis and SQLite because they're easy to setup.

SeaStreamer    SeaORM
Kafka, Redis   MySQL, Postgres, SQLite, SQL Server¹

To get started, you can quickly start a Redis instance via Docker:

docker run -d --rm --name redis -p 6379:6379 redis

1. Websocket subscription

Let's write a websocket subscriber in Rust. Here we'd use the awesome async-tungstenite library.

We'd subscribe to the GBP/USD price feed from Kraken; the API documentation can be found here. NB: they're not real FX data, but should be good enough for a demo.

Step 1, create a websocket connection:

let (mut ws, _) = async_tungstenite::tokio::connect_async("wss://ws.kraken.com/").await?;

Step 2, send a subscription request:

ws.send(Message::Text(
    r#"{ "event": "subscribe", "pair": ["GBP/USD"], "subscription": { "name": "spread" } }"#.to_owned(),
)).await?;

Step 3, stream the messages:

loop {
    match ws.next().await {
        Some(Ok(Message::Text(data))) => {
            if data == r#"{"event":"heartbeat"}"# {
                continue;
            }
            println!("{data}");
        }
        Some(Err(e)) => bail!("Socket error: {e}"),
        None => bail!("Stream ended"),
        e => bail!("Unexpected message {e:?}"),
    }
}

2. Redis / Kafka Stream Producer

Step 1, create a SeaStreamer instance connecting to Redis / Kafka:

let streamer = SeaStreamer::connect(
    "redis://localhost", SeaConnectOptions::default()
).await?;

There are a bunch of different options for Redis & Kafka respectively; you can refer to SeaStreamer's documentation.

Step 2, create a producer:

let producer: SeaProducer = streamer
    .create_producer(
        "GBP_USD".parse()?,  // Stream Key
        Default::default(),  // Producer Options
    )
    .await?;

There aren't any specific options for Producer.

Step 3, decode the messages:

let spread: SpreadMessage = serde_json::from_str(&data)?;
let message = serde_json::to_string(&spread)?;

Here, we use the awesome serde library to perform message parsing and conversion:

// The raw message looks like:
// [80478222,["1.25475","1.25489","1714946803.030088","949.74917071","223.36195920"],"spread","GBP/USD"]

#[derive(Debug, Serialize, Deserialize)]
struct SpreadMessage {
    #[allow(dead_code)]
    #[serde(skip_serializing)]
    channel_id: u32, // placeholder; not needed
    spread: Spread,  // nested object
    channel_name: String,
    pair: String,
}

#[derive(Debug, Serialize, Deserialize)]
struct Spread {
    bid: Decimal,
    ask: Decimal,
    #[serde(with = "timestamp_serde")] // custom serde
    timestamp: Timestamp,
    bid_vol: Decimal,
    ask_vol: Decimal,
}

Step 4, send the messages:

loop {
    match ws.next().await {
        Some(Ok(Message::Text(data))) => {
            let spread: SpreadMessage = serde_json::from_str(&data)?;
            let message = serde_json::to_string(&spread)?;
            producer.send(message)?; // <--
        }
        // ... other match arms as before
    }
}

Note that the producer.send call is not async/await, and this is a crucial detail! This removes the stream processing bottleneck. Behind the scenes, messages will be buffered and handled on a different thread, so that your input stream can run as close to real-time as possible.

Here is the complete price-feed app which you can checkout from the SeaStreamer repository:

$ cd examples/price-feed
$ cargo run

Connecting ..
Connected.
Subscribed.
{"spread":{"bid":"1.25495","ask":"1.25513","timestamp":"2024-05-05T16:31:00.961214","bid_vol":"61.50588918","ask_vol":"787.90883861"},"channel_name":"spread","pair":"GBP/USD"}
..

3. SQL Data Sink

Step 1, create a stream consumer:

let streamer = SeaStreamer::connect(streamer_uri, Default::default()).await?;

let consumer = streamer
    .create_consumer(&[stream_key], SeaConsumerOptions::default())
    .await?;

There are a bunch of different options for Redis & Kafka respectively; you can refer to SeaStreamer's examples. Here we use the default, which is a real-time stateless stream consumer.

Step 2, create a database:

let mut opt = ConnectOptions::new("sqlite://my_db.sqlite?mode=rwc");
opt.max_connections(1).sqlx_logging(false);
let db = Database::connect(opt).await?;

We set max_connections to 1, because our data sink will not do concurrent inserts anyway.

Here is the Entity:

#[derive(Debug, Clone, PartialEq, Eq, DeriveEntityModel, Deserialize)]
#[sea_orm(table_name = "event")]
pub struct Model {
    #[sea_orm(primary_key)]
    #[serde(default)]
    pub id: i32,
    pub timestamp: String,
    pub bid: String,
    pub ask: String,
    pub bid_vol: String,
    pub ask_vol: String,
}

The table shall be named event and we derive Deserialize on the Model.

We will use the following helper method to create the database table, where the schema is derived from the Entity:

async fn create_tables(db: &DbConn) -> Result<(), DbErr> {
    let builder = db.get_database_backend();
    let schema = Schema::new(builder);

    let stmt = builder.build(
        schema.create_table_from_entity(Entity).if_not_exists(),
    );
    log::info!("{stmt}");
    db.execute(stmt).await?;

    Ok(())
}

This is especially handy for SQLite, where the app owns the database schema. For other databases, you'd probably use the SeaORM migration system.

Step 3, insert the data into database:

loop {
    let message = consumer.next().await?;
    let payload = message.message();
    let json = payload.as_str()?;
    let item: Item = serde_json::from_str(json)?;
    let mut spread = item.spread.into_active_model();
    spread.id = NotSet; // let the db assign the primary key
    spread.save(&db).await?;
}

In a few lines of code, we:

  1. receive the message from Redis
  2. decode the message as JSON
  3. convert the message into a SeaORM Model
  4. insert the Model into database

Run the sea-orm-sink app in another terminal:

$ cd examples/sea-orm-sink
$ RUST_LOG=info cargo run

[INFO sea_streamer_sea_orm_sink] CREATE TABLE IF NOT EXISTS "event" ( "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "timestamp" varchar NOT NULL, "bid" varchar NOT NULL, "ask" varchar NOT NULL, "bid_vol" varchar NOT NULL, "ask_vol" varchar NOT NULL )
[INFO sea_streamer_sea_orm_sink] {"spread":{"bid":"1.25495","ask":"1.25513","timestamp":"2024-05-05T16:31:00.961214","bid_vol":"61.50588918","ask_vol":"787.90883861"},"channel_name":"spread","pair":"GBP/USD"}

That's it! Now you can inspect the data with your favourite database GUI and write some SQL queries:

screenshot of SQLite database

Conclusion

In this article, we covered:

  1. Micro-services architecture in stream processing
  2. Async real-time programming in Rust
  3. The awesomeness of the SeaQL and Rust ecosystem²

Here are a few suggestions on how you can take it from here:

  1. Stream the data to a "big database" like MySQL or Postgres
  2. Subscribe to more streams and sink to more tables
  3. Buffer the events and insert the data in batches to achieve higher throughput

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

· 10 min read
SeaQL Team
SeaORM 1.0-rc Banner

This blog post summarizes the new features and enhancements introduced in SeaORM 1.0-rc.x:

New Features

Refreshed migration schema definition

#2099 We are aware that SeaORM's migration scripts can sometimes look verbose. Thanks to the clever design made by Loco, we've refreshed the schema definition syntax.

An old migration script looks like this:

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .create_table(
                Table::create()
                    .table(Users::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(Users::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(ColumnDef::new(Users::Pid).uuid().not_null())
                    .col(ColumnDef::new(Users::Email).string().not_null().unique_key())
                    // ...
    }
}

Now, using the new schema helpers, you can define the schema with a simplified syntax!

// Remember to import `sea_orm_migration::schema::*`
use sea_orm_migration::{prelude::*, schema::*};

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .create_table(
                Table::create()
                    .table(Users::Table)
                    .if_not_exists()
                    .col(pk_auto(Users::Id))         // Primary key with auto-increment
                    .col(uuid(Users::Pid))           // UUID column
                    .col(string_uniq(Users::Email))  // String column with unique and not null constraint
                    .col(string(Users::Password))    // String column
                    .col(string(Users::ApiKey).unique_key())
                    .col(string(Users::Name))
                    .col(string_null(Users::ResetToken))      // Nullable string column
                    .col(timestamp_null(Users::ResetSentAt))  // Nullable timestamp column
                    .col(string_null(Users::EmailVerificationToken))
                    .col(timestamp_null(Users::EmailVerificationSentAt))
                    .col(timestamp_null(Users::EmailVerifiedAt))
                    .to_owned(),
            )
            .await
    }

    // ...
}

There are three variants for each commonly used column type:

  • <COLUMN_TYPE>() helper function, e.g. string(), defines a non-null string column
  • <COLUMN_TYPE>_null() helper function, e.g. string_null(), defines a nullable string column
  • <COLUMN_TYPE>_uniq() helper function, e.g. string_uniq(), defines a non-null and unique string column

The new schema helpers can be used by importing sea_orm_migration::schema::*. The migration library is fully backward compatible, so there is no rush to migrate old scripts. The new syntax is recommended for new scripts, and all examples in the SeaORM repository have been updated for demonstration. For advanced use cases, the old SeaQuery syntax can still be used.
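
As a quick illustration, here are the three variants side by side (a sketch using a hypothetical Items table):

use sea_orm_migration::{prelude::*, schema::*};

#[derive(DeriveIden)]
enum Items {
    Table,
    Name,
    Note,
    Slug,
}

let stmt = Table::create()
    .table(Items::Table)
    .col(string(Items::Name))      // "name" varchar NOT NULL
    .col(string_null(Items::Note)) // "note" varchar NULL
    .col(string_uniq(Items::Slug)) // "slug" varchar NOT NULL UNIQUE
    .to_owned();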

Reworked SQLite Type Mappings

sea-orm#2077 sea-query#735 sea-schema#117 We've reworked the type mappings for SQLite across the SeaQL ecosystem, such that SeaQuery and SeaSchema are now reciprocal to each other. Migrations written with SeaQuery can be rediscovered by sea-orm-cli and generate compatible entities! In other words, the roundtrip is complete.

Data types will be mapped to SQLite types with a custom naming scheme following SQLite's affinity rule:

  • INTEGER: integer, tiny_integer, small_integer, big_integer and boolean are stored as integer
  • REAL: float, double, decimal and money are stored as real
  • BLOB: blob and varbinary_blob are stored as blob
  • TEXT: all other data types are stored as text, including string, char, text, json, uuid, date, time, datetime, timestamp, etc.

To illustrate,

assert_eq!(
    Table::create()
        .table(Alias::new("strange"))
        .col(ColumnDef::new(Alias::new("id")).integer().not_null().auto_increment().primary_key())
        .col(ColumnDef::new(Alias::new("int1")).integer())
        .col(ColumnDef::new(Alias::new("int2")).tiny_integer())
        .col(ColumnDef::new(Alias::new("int3")).small_integer())
        .col(ColumnDef::new(Alias::new("int4")).big_integer())
        .col(ColumnDef::new(Alias::new("string1")).string())
        .col(ColumnDef::new(Alias::new("string2")).string_len(24))
        .col(ColumnDef::new(Alias::new("char1")).char())
        .col(ColumnDef::new(Alias::new("char2")).char_len(24))
        .col(ColumnDef::new(Alias::new("text_col")).text())
        .col(ColumnDef::new(Alias::new("json_col")).json())
        .col(ColumnDef::new(Alias::new("uuid_col")).uuid())
        .col(ColumnDef::new(Alias::new("decimal1")).decimal())
        .col(ColumnDef::new(Alias::new("decimal2")).decimal_len(12, 4))
        .col(ColumnDef::new(Alias::new("money1")).money())
        .col(ColumnDef::new(Alias::new("money2")).money_len(12, 4))
        .col(ColumnDef::new(Alias::new("float_col")).float())
        .col(ColumnDef::new(Alias::new("double_col")).double())
        .col(ColumnDef::new(Alias::new("date_col")).date())
        .col(ColumnDef::new(Alias::new("time_col")).time())
        .col(ColumnDef::new(Alias::new("datetime_col")).date_time())
        .col(ColumnDef::new(Alias::new("boolean_col")).boolean())
        .col(ColumnDef::new(Alias::new("binary2")).binary_len(1024))
        .col(ColumnDef::new(Alias::new("binary3")).var_binary(1024))
        .to_string(SqliteQueryBuilder),
    [
        r#"CREATE TABLE "strange" ("#,
        r#""id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,"#,
        r#""int1" integer,"#,
        r#""int2" tinyint,"#,
        r#""int3" smallint,"#,
        r#""int4" bigint,"#,
        r#""string1" varchar,"#,
        r#""string2" varchar(24),"#,
        r#""char1" char,"#,
        r#""char2" char(24),"#,
        r#""text_col" text,"#,
        r#""json_col" json_text,"#,
        r#""uuid_col" uuid_text,"#,
        r#""decimal1" real,"#,
        r#""decimal2" real(12, 4),"#,
        r#""money1" real_money,"#,
        r#""money2" real_money(12, 4),"#,
        r#""float_col" float,"#,
        r#""double_col" double,"#,
        r#""date_col" date_text,"#,
        r#""time_col" time_text,"#,
        r#""datetime_col" datetime_text,"#,
        r#""boolean_col" boolean,"#,
        r#""binary2" blob(1024),"#,
        r#""binary3" varbinary_blob(1024)"#,
        r#")"#,
    ]
    .join(" ")
);

The full type mapping table is documented here:

ColumnType              MySQL data type     PostgreSQL data type          SQLite data type
Char                    char                char                          char
String                  varchar             varchar                       varchar
Text                    text                text                          text
TinyInteger             tinyint             smallint                      tinyint
SmallInteger            smallint            smallint                      smallint
Integer                 int                 integer                       integer
BigInteger              bigint              bigint                        integer
TinyUnsigned            tinyint unsigned    smallint                      tinyint
SmallUnsigned           smallint unsigned   smallint                      smallint
Unsigned                int unsigned        integer                       integer
BigUnsigned             bigint unsigned     bigint                        integer
Float                   float               real                          float
Double                  double              double precision              double
Decimal                 decimal             decimal                       real
DateTime                datetime            timestamp without time zone   datetime_text
Timestamp               timestamp           timestamp                     timestamp_text
TimestampWithTimeZone   timestamp           timestamp with time zone      timestamp_with_timezone_text
Time                    time                time                          time_text
Date                    date                date                          date_text
Year                    year                N/A                           N/A
Interval                N/A                 interval                      N/A
Binary                  binary              bytea                         blob
VarBinary               varbinary           bytea                         varbinary_blob
Bit                     bit                 bit                           N/A
VarBit                  bit                 varbit                        N/A
Boolean                 bool                bool                          boolean
Money                   decimal             money                         real_money
Json                    json                json                          json_text
JsonBinary              json                jsonb                         jsonb_text
Uuid                    binary(16)          uuid                          uuid_text
Enum                    ENUM(...)           ENUM_NAME                     enum_text
Array                   N/A                 DATA_TYPE[]                   N/A
Cidr                    N/A                 cidr                          N/A
Inet                    N/A                 inet                          N/A
MacAddr                 N/A                 macaddr                       N/A
LTree                   N/A                 ltree                         N/A

Enhancements

  • #2137 DerivePartialModel macro attribute entity now supports syn::Type
#[derive(DerivePartialModel)]
#[sea_orm(entity = "<entity::Model as ModelTrait>::Entity")]
struct EntityNameNotAIdent {
#[sea_orm(from_col = "foo2")]
_foo: i32,
#[sea_orm(from_col = "bar2")]
_bar: String,
}
  • #2146 Added RelationDef::from_alias()
assert_eq!(
cake::Entity::find()
.join_as(
JoinType::LeftJoin,
cake_filling::Relation::Cake.def().rev(),
cf.clone()
)
.join(
JoinType::LeftJoin,
cake_filling::Relation::Filling.def().from_alias(cf)
)
.build(DbBackend::MySql)
.to_string(),
[
"SELECT `cake`.`id`, `cake`.`name` FROM `cake`",
"LEFT JOIN `cake_filling` AS `cf` ON `cake`.`id` = `cf`.`cake_id`",
"LEFT JOIN `filling` ON `cf`.`filling_id` = `filling`.`id`",
]
.join(" ")
);
  • #1665 [sea-orm-macro] Qualify traits in DeriveActiveModel macro
  • #2064 [sea-orm-cli] Fix migrate generate on empty mod.rs files

Breaking Changes

  • #2145 Renamed ConnectOptions::pool_options() to ConnectOptions::sqlx_pool_options()
  • #2145 Made sqlx_common private, hiding sqlx_error_to_xxx_err
  • MySQL money type maps to decimal
  • MySQL blob types moved to extension::mysql::MySqlType; ColumnDef::blob() now takes no parameters
assert_eq!(
Table::create()
.table(BinaryType::Table)
.col(ColumnDef::new(BinaryType::BinaryLen).binary_len(32))
.col(ColumnDef::new(BinaryType::Binary).binary())
.col(ColumnDef::new(BinaryType::Blob).custom(MySqlType::Blob))
.col(ColumnDef::new(BinaryType::TinyBlob).custom(MySqlType::TinyBlob))
.col(ColumnDef::new(BinaryType::MediumBlob).custom(MySqlType::MediumBlob))
.col(ColumnDef::new(BinaryType::LongBlob).custom(MySqlType::LongBlob))
.to_string(MysqlQueryBuilder),
[
"CREATE TABLE `binary_type` (",
"`binlen` binary(32),",
"`bin` binary(1),",
"`b` blob,",
"`tb` tinyblob,",
"`mb` mediumblob,",
"`lb` longblob",
")",
]
.join(" ")
);
  • ColumnDef::binary() sets column type as binary with default length of 1
  • Removed BlobSize enum
  • Added StringLen to represent length of varchar / varbinary (see the sketch after this list)
/// Length for var-char/binary; default to 255
pub enum StringLen {
/// String size
N(u32),
Max,
#[default]
None,
}
  • ValueType::column_type() of Vec<u8> maps to VarBinary(StringLen::None)
  • ValueType::column_type() of String maps to String(StringLen::None)
  • ColumnType::Bit maps to bit for Postgres
  • ColumnType::Binary and ColumnType::VarBinary map to bytea for Postgres
  • Value::Decimal and Value::BigDecimal map to real for SQLite
  • ColumnType::Year(Option<MySqlYear>) changed to ColumnType::Year
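
To illustrate the new StringLen, here is a minimal sketch (table and column names are illustrative), reusing the SQLite mappings shown earlier: string() leaves the length unspecified, while string_len(24) pins it down.

use sea_query::{Alias, ColumnDef, SqliteQueryBuilder, Table};

assert_eq!(
    Table::create()
        .table(Alias::new("t"))
        // String(StringLen::None) renders as plain `varchar`
        .col(ColumnDef::new(Alias::new("c1")).string())
        // String(StringLen::N(24)) renders as `varchar(24)`
        .col(ColumnDef::new(Alias::new("c2")).string_len(24))
        .to_string(SqliteQueryBuilder),
    r#"CREATE TABLE "t" ( "c1" varchar, "c2" varchar(24) )"#
);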

Upgrades

  • Upgrade sea-query to 0.31.0-rc.3
  • Upgrade sea-schema to 0.15.0-rc.4
  • Upgrade sea-query-binder to 0.6.0-rc.1
  • #2088 Upgrade strum to 0.26

House Keeping

  • #2140 Improved Actix example to return 404 not found on unexpected inputs
  • #2154 Deprecated Actix v3 example
  • #2136 Re-enabled rocket_okapi example

Release Planning

In the previous release of SeaORM, we stated that we wanted our next release to be 1.0. We are indeed very close to 1.0 now!

While 0.12 will still be maintained until 1.0 is finalized, you are welcome to try out 1.0-rc.x today! There are still a few minor, but technically breaking, changes:

  1. #2185 Adding trait const ARITY to PrimaryKeyTrait, allowing users to write better generic code
  2. #2186 Associating ActiveModel to EntityTrait, allowing users to extend the behaviour of Entities

Now is also the perfect time for you to propose breaking changes that would have a long-term impact on SeaORM. After the stabilization, we hope that SeaORM can offer a stable API surface that developers can use in production for years to come.

We won't have more than 2 major releases in a year, and each major release will be maintained for at least 1 year. It's still tentative, but that's what we have in mind for now. Moreover, it will actually allow us to ship new features more frequently!

SQL Server Support

We've been planning SQL Server support for SeaORM for a while, but it was put aside in 2023 (which I regret). Anyway, SQL Server support is coming soon! It will first be offered as a closed beta to our partners. If you are interested, please join our waiting list.

If you feel generous, a small donation will be greatly appreciated, and goes a long way towards sustaining the organization.

A big shout out to our sponsors 😇:

Gold Sponsors

GitHub Sponsors

Afonso Barracha
Shane Sveller
Dean Sheather
Marcus Buffett
René Klačan
Apinan I.
Kentaro Tanaka
Natsuki Ikeguchi
Marlon Mueller-Soppart
ul
MasakiMiyazaki
Manfred Lee
KallyDev
Daniel Gallups
Caido
Coolpany SE

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contribute directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

· 7 min read
SeaQL Team
SeaORM 0.12 Banner

It had been a while since the initial SeaORM 0.12 release. This blog post summarizes the new features and enhancements introduced in SeaORM 0.12.2 through 0.12.12!

Celebrating 2M downloads on crates.io 📦

We've just reached the milestone of 2,000,000 all-time downloads on crates.io. It's a testament to SeaORM's adoption in professional use. Thank you to all our users for your trust and for being a part of our community.

New Features

Entity format update

  • #1898 Add support for root JSON arrays (requires the json-array / postgres-array feature)! It involved an intricate type system refactor to work around the orphan rule.
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "json_struct_vec")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
#[sea_orm(column_type = "Json")]
pub struct_vec: Vec<JsonColumn>,
}

#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
pub struct JsonColumn {
pub value: String,
}
  • #2009 Added comment attribute for Entity; create_table_from_entity now supports comment on MySQL
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "applog", comment = "app logs")]
pub struct Model {
#[sea_orm(primary_key, comment = "ID")]
pub id: i32,
#[sea_orm(comment = "action")]
pub action: String,
pub json: Json,
pub created_at: DateTimeWithTimeZone,
}

Cursor paginator improvements

  • #2037 Added descending order to Cursor:
// (default behaviour) Before 5 ASC, i.e. id < 5

let mut cursor = Entity::find().cursor_by(Column::Id);
cursor.before(5);

assert_eq!(
cursor.first(4).all(db).await?,
[
Model { id: 1 },
Model { id: 2 },
Model { id: 3 },
Model { id: 4 },
]
);

// (new API) After 5 DESC, i.e. id < 5

let mut cursor = Entity::find().cursor_by(Column::Id);
cursor.after(5).desc();

assert_eq!(
cursor.first(4).all(db).await?,
[
Model { id: 4 },
Model { id: 3 },
Model { id: 2 },
Model { id: 1 },
]
);
  • #1826 Added cursor support to SelectTwo:
// Join with linked relation; cursor by first table's id

cake::Entity::find()
.find_also_linked(entity_linked::CakeToFillingVendor)
.cursor_by(cake::Column::Id)
.before(10)
.first(2)
.all(&db)
.await?

// Join with relation; cursor by the 2nd table's id

cake::Entity::find()
.find_also_related(Fruit)
.cursor_by_other(fruit::Column::Id)
.before(10)
.first(2)
.all(&db)
.await?

Added "proxy" to database backend

#1881, #2000 Added "proxy" to database backend (requires feature flag proxy).

It enables the possibility of using SeaORM on edge / client-side! See the GlueSQL demo for an example.

Enhancements

  • #1954 [sea-orm-macro] Added #[sea_orm(skip)] to FromQueryResult derive macro
#[derive(Clone, Debug, PartialEq, Eq, Deserialize, Serialize, FromQueryResult)]
pub struct PublicUser {
pub id: i64,
pub name: String,
#[serde(skip_serializing_if = "Vec::is_empty")]
#[sea_orm(skip)]
pub something: Something,
}
  • #1598 [sea-orm-macro] Added support for Postgres arrays in FromQueryResult impl of JsonValue
// existing API:

assert_eq!(
Entity::find_by_id(1).one(db).await?,
Some(Model {
id: 1,
name: "Collection 1".into(),
integers: vec![1, 2, 3],
teas: vec![Tea::BreakfastTea],
colors: vec![Color::Black],
})
);

// new API:

assert_eq!(
Entity::find_by_id(1).into_json().one(db).await?,
Some(json!({
"id": 1,
"name": "Collection 1",
"integers": [1, 2, 3],
"teas": ["BreakfastTea"],
"colors": [0],
}))
);
  • #1828 [sea-orm-migration] Check if an index exists
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
// ...

// Make sure the index hasn't been created yet
assert!(!manager.has_index("cake", "cake_name_index").await?);

manager
.create_index(
Index::create()
.name("cake_name_index")
.table(Cake::Table)
.col(Cake::Name)
.to_owned(),
)
.await?;

Ok(())
}

async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
// ...
}
}
  • #2030 Improve query performance of Paginator's COUNT query
  • #2055 Added SQLx slow statements logging to ConnectOptions
  • #1867 Added QuerySelect::lock_with_behavior
  • #2002 Cast enums in is_in and is_not_in
  • #1999 Add source annotations to errors
  • #1960 Implement StatementBuilder for sea_query::WithQuery
  • #1979 Added method expr_as_ that accepts self
  • #1868 Loader: use ValueTuple as hash key
  • #1934 [sea-orm-cli] Added --enum-extra-derives
  • #1952 [sea-orm-cli] Added --enum-extra-attributes
  • #1693 [sea-orm-cli] Support generation of related entity with composite foreign key

Bug fixes

  • #1855, #2054 [sea-orm-macro] Qualify types in DeriveValueType macro
  • #1953 [sea-orm-cli] Fix duplicated active enum use statements on generated entities
  • #1821 [sea-orm-cli] Fix entity generation for non-alphanumeric enum variants
  • #2071 [sea-orm-cli] Fix entity generation for relations with composite keys
  • #1800 Fixed find_with_related consolidation logic
  • 5a6acd67 Fixed Loader panic on empty inputs

Upgrades

  • #1984 Upgraded axum example to 0.7
  • #1858 Upgraded chrono to 0.4.30
  • #1959 Upgraded rocket to 0.5.0
  • Upgraded sea-query to 0.30.5
  • Upgraded sea-schema to 0.14.2
  • Upgraded salvo to 0.50

House Keeping

  • #2057 Fix clippy warnings on 1.75
  • #1811 Added test cases for find_xxx_related/linked

Release planning

In the announcement blog post of SeaORM 0.12, we stated we want to reduce the frequency of breaking releases while maintaining the pace for feature updates and enhancements. I am glad to say we've accomplished that!

There are still a few breaking changes planned for the next major release. After some discussions and consideration, we decided that the next major release will be a release candidate for 1.0!

If you feel generous, a small donation will be greatly appreciated, and goes a long way towards sustaining the organization.

A big shout out to our sponsors 😇:

Gold Sponsors

GitHub Sponsors

Émile Fugulin
Afonso Barracha
Shane Sveller
Dean Sheather
Marcus Buffett
René Klačan
IceApinan
Jacob Trueb
Kentaro Tanaka
Natsuki Ikeguchi
Marlon Mueller-Soppart
ul
Manfred Lee
KallyDev
Daniel Gallups
Coolpany-SE

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contributes directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

· 10 min read
Billy Chan

524 members of the SeaQL community from 41 countries kindly contributed their thoughts on using SeaQL libraries, learning Rust and employing Rust in their day to day development lives. From these responses we hope to get an understanding of where the SeaQL and Rust community stands in 2023.

This is our first community survey; we will conduct it annually to keep track of how the community evolves over time.

Demographics

Q. Where are you located in?

Participants are from 41 countries across the world!

Other: Argentina, Australia, Austria, Belarus, Belgium, Cyprus, Czechia, Denmark, Hungary, Iran, Ireland, Italy, Japan, Kazakhstan, Korea, Mongolia, Nigeria, Norway, Peru, Poland, Slovakia, South Africa, Spain, Sweden, Taiwan, Thailand, Turkey, Ukraine

Use of SeaQL Libraries

Q. Are you using SeaQL libraries in building a project?

Q. Which SeaQL libraries are you using in building a project?

Other: Seaography, SeaStreamer

Q. Are you using SeaQL libraries in a personal, academic or professional context?

Q. Why did you choose SeaQL libraries?

Other: Async support, future proof and good documentation; Good Query Performance; It was recommended on websites and YouTube; Does not use SQL for migrations; Beginner-friendly and easy to get started; Easy to translate from Eloquent ORM knowledge; Can drop in to SeaQuery if necessary; I started with SQLx, then tried SeaQuery; I found good examples on YouTube

Q. What qualities of SeaQL libraries do you think are important?

Other: Simple Syntax; Being able to easily express what you would otherwise be able to write in pure SQL; Migration and entity generation; Clarity of the implementation and usage patterns; Efficient query building, especially with relations and joins; Ergonomic API

Team & Project Nature

Q. How many team members (including you) are working on the project?

Q. Can you categorize the nature of the project?

Other: Forecasting; Financial trading; Enterprise Resource Planning (ERP); Fintech; Cloud infrastructure automation; Backend for desktop, mobile and web application

Tech Stack

Q. What is your development environment?

Linux Breakdown

Windows Breakdown

macOS Breakdown

Q. Which database(s) do you use?

Q. Which web framework are you using?

Q. What is the deployment environment?

Rust at Work

Q. Are you using Rust at work?

Q. Which industry your company is in?

Vague description of the company

A banking company; A business to business lending platform; A cloud storage; A consulting company; A cybersecurity management platform; An IT solution company; An E-Commerce clothing store; A children entertainments company; A factory construction management platform; A fintech startup; A geology technology company; A publicly traded health-tech company; A private restaurant chain; An industrial IoT for heating and water distributions; An internet provider; A nonprofit tech research organization; A payment service provider; A road intelligence company; A SaaS startup; A server hosting provider; A DevOps platform that helps our users scale their Kubernetes application; An Automotive company

Q. What is the size of your company?

Q. How many engineers in your company are dedicated to writing Rust?

Q. Which layer(s) of the technology stack are using Rust?

Learning Rust

Q. Are you learning / new to Rust?

Q. Which language(s) are you most familiar with?

Q. Are you familiar with SQL?

Q. Do you find Rust easy or hard to learn?

Q. What motivates you to learn Rust?

Other: Ability to develop fast, secure and standalone API driven tools; Efficiency, safety, low resource usage; Good design decisions from the start; Reliability and ease of development; School makes me learn; Rust is too cool; The ecosystem of libraries + general competence of lib authors; It is the most loved language; The guarantees Rust provides; Learning something new; Type safety and speed; Want to get away from NULL; No boilerplate, if you do not want it; Performance

Q. What learning resources do you rely on?

Other: YouTube; Online Courses; ChatGPT

Q. What is your first project built using Rust?

Other: Chatbot; Scraper; Rasterization of the Mandelbrot set; IoT; Library

What's Next

Q. Which aspects do you want to see advancement on SeaORM?

Thank you for all the suggestions, we will certainly take them into account!

Other: Full MySQL coverage; MS SQL Server support; Structured queries for complex joins; A stable release; Data seeding; Migrations based on Entity diffs; Type safety; Support tables without primary key; Turso integration; Fetching nested structures; Views

Q. What tools would you be interested in using, if developed first-party by SeaQL?

Other: An API integration testing utility; An oso-based authorization integration; A visual tool for managing migrations; Database layout editor (like dbdiagram.io)

Share Your Thoughts

Q. Anything else you want to say?

Didn't expect this section to turn into a testimonial, thank you for all the kind words :)

Good job yall

Great projects, thanks for your hard work

I expect it to be an asynchronous type-safe library. Keep up the good work!

I'd like to see entity generation without a database

The website, support from JetBrains, the documentation and the release cycle are very nice!

I'm very interested in how SeaORM will continue evolving and I would like to wish you the best of luck!

I've found SeaORM very useful and I'm very grateful to the development team for creating and maintaining it!

In TypeORM I can write entities and then generate migration from them. It's very handy. It helps to increase development speed. It would be nice to have this functionality in SeaORM.

It needs to have better integration with SeaQuery, I sometimes need to get to it because not all features are available in SeaORM which makes it a pain.

Keep the good work!

Keep going! Love SeaORM!

Keep up the great work. Rust needs a fast, ergonomic and reliable ORM.

SeaORM is very powerful, but the rust docs and tutorial examples could be more fleshed out.

SeaORM is an awesome library. Most things are quite concise and therefore straightforward. Simply a few edge cases concerning DB specific types and values could be better.

The trait system is too complex and coding experience is not pretty well with that.

Automatic migration generation would make the library pretty much perfect in my opinion.

SeaQL tutorials could be better. Much more detailed explanation and definitely it has to have best practices section for Design Patterns like and good best practices related with clean architecture.

SeaQL are great products and it’s very enjoyable using them

Thank you <3

Thank you for awesome library!

Thank you for this wonderful project. I feel the documentation lacks examples for small functions and usage of some obscure features.

Thank you for your hard work!

Thank you for your work on SeaQL, your efforts are appreciated

Thank you for your work, we are seeking actively to include SeaORM in our projects

Thank you very much for your work!

Thanks a lot for the amazing work you guys put into this set of libraries. This is an amazing development for the rust ecosystem.

Thanks and keep up the good work.

Thanks for a great tool!

Thanks for all the amazing work.

Thanks for making SeaORM!

The project I am doing for work is only a prototype, it's a new implementation of a current Python forecasting project which uses a pandas and a custom psycopg2 orm. My intent is to create a faster/dev friendly version with SeaORM and Polars. I am hoping to eventually get a prototype I can display to my team to get a go ahead to fully develop a new version, and to migrate 4-5 other forecasting apps using shared libraries for io and calculations.

I have also been using SeaORM for a small API client for financial data, which I may make open source.

I think one thing which could really improve SeaORM is some more advance examples in the documentation section. The docs are really detailed as far as rust documentation goes.

Very promising project, keep it up.

Thank you so much for taking it upon yourselves to selflessly give your free time. It probably doesn't matter much, but thank you so much for your work. SeaORM is a fantastic tool that I can see myself using for a long time to come. I hope to make contributions in any form when I am under better circumstances :3 Kudos to the team!

Your libraries are truly great; at least I think they are far better than Diesel. They are simple to get started with and very friendly to newcomers, which is the biggest highlight. Second, the library seems to be able to express very complex join SQL logic without writing raw SQL, which also deserves a big thumbs up. However, the documentation in this area seems a bit brief; I hope you can enrich the documentation and explain complex queries in more detail, and then it would be even better. Thank you, I will keep following you, and if a future project of mine involves an ORM, it will definitely be yours!

Rustacean Sticker Pack 🦀

The Rustacean Sticker Pack is the perfect way to express your passion for Rust. Our stickers are made with a premium water-resistant vinyl with a unique matte finish. Stick them on your laptop, notebook, or any gadget to show off your love for Rust!

Moreover, all proceeds contributes directly to the ongoing development of SeaQL projects.

Sticker Pack Contents:

  • Logo of SeaQL projects: SeaQL, SeaORM, SeaQuery, Seaography, FireDBG
  • Mascot of SeaQL: Terres the Hermit Crab
  • Mascot of Rust: Ferris the Crab
  • The Rustacean word

Support SeaQL and get a Sticker Pack!

Rustacean Sticker Pack by SeaQL

· One min read
SeaQL Team

It is our honour to have been awarded by OpenUK for the 2023 Award in the Software category! The award ceremony was a very memorable experience. A huge thanks to Red Badger who sponsored the software award.

In 2023, we released SeaStreamer, two major versions of SeaORM, a new version of Seaography, and have been busy working on a new project on the side.

We reached the milestone of 5k GitHub stars and 2M crates.io downloads mid-year.

In the summer, we took in two interns for our 3rd summer of code.

We plan to offer internships tailored to UK students in 2024 through university internship programs. As always, we welcome contributors from all over the world, and maybe we will enrol in GSoC 2024 again (but open-source is not bound to any schedule, so you can start contributing anytime).

A big thanks to our sponsors who continued to support us, and we look forward to a more impactful 2024.

· 4 min read
Chris Tsang

If you are writing an async application in Rust, at some point you'd want to separate the code into several crates. There are some benefits:

  1. Better encapsulation. Having a crate boundary between sub-systems can lead to cleaner code and a more well-defined API. No more pub(crate)!
  2. Faster compilation. By breaking down a big crate into several independent small crates, they can be compiled concurrently.

But the question is, if you are using only one async runtime anyway, what are the benefits of writing async-runtime-generic libraries?

  1. Portability. You can easily switch to a different async runtime, or wasm.
  2. Correctness. Testing a library against both tokio and async-std can uncover more bugs, including concurrency bugs (due to fuzzy task execution orders) and "undefined behaviour" due either to misunderstanding or to async-runtime implementation details.

So now you've decided to write async-runtime-generic libraries! Here I want to share 3 strategies along with examples found in the Rust ecosystem.

Approach 1: Defining your own AsyncRuntime trait

Using the futures crate you can write very generic library code, but there is one missing piece: time - to sleep or timeout, you have to rely on an async runtime. If that's all you need, you can define your own AsyncRuntime trait and require downstream to implement it. This is the approach used by rdkafka:

pub trait AsyncRuntime: Send + Sync + 'static {
type Delay: Future<Output = ()> + Send;

/// It basically means the return value must be a `Future`
fn sleep(duration: Duration) -> Self::Delay;
}

Here is how it's implemented:

impl AsyncRuntime for TokioRuntime {
type Delay = tokio::time::Sleep;

fn sleep(duration: Duration) -> Self::Delay {
tokio::time::sleep(duration)
}
}

Library code to use the above:

async fn operation<R: AsyncRuntime>() {
R::sleep(Duration::from_millis(1)).await;
}
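
Downstream code then selects the runtime explicitly via the type parameter; for example, with the TokioRuntime implementation shown above:

// In an async context running under tokio:
operation::<TokioRuntime>().await;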

Approach 2: Abstract the async runtimes internally and expose feature flags

This is the approach used by redis-rs.

To work with network connections or file handle, you can use the AsyncRead / AsyncWrite traits:

#[async_trait]
pub(crate) trait AsyncRuntime: Send + Sync + 'static {
type Connection: AsyncRead + AsyncWrite + Send + Sync + 'static;

async fn connect(addr: SocketAddr) -> std::io::Result<Self::Connection>;
}

Then you'll define a module for each async runtime:

#[cfg(feature = "runtime-async-std")]
mod async_std_impl;
#[cfg(feature = "runtime-async-std")]
use async_std_impl::*;

#[cfg(feature = "runtime-tokio")]
mod tokio_impl;
#[cfg(feature = "runtime-tokio")]
use tokio_impl::*;

Where each module would look like:

tokio_impl.rs
#[async_trait]
impl AsyncRuntime for TokioRuntime {
type Connection = tokio::net::TcpStream;

async fn connect(addr: SocketAddr) -> std::io::Result<Self::Connection> {
tokio::net::TcpStream::connect(addr).await
}
}

Library code to use the above:

async fn operation<R: AsyncRuntime>(conn: R::Connection) {
conn.write(b"some bytes").await;
}

Approach 3: Maintain an async runtime abstraction crate

This is the approach used by SQLx and SeaStreamer.

Basically, aggregate all async runtime APIs you'd use and write a wrapper library. This may be tedious, but this also has the benefit of specifying all interactions with the async runtime in one place for your project, which could be handy for debugging or tracing.

For example, async Task handling:

common-async-runtime/tokio_task.rs
pub use tokio::task::{JoinHandle as TaskHandle};

pub fn spawn_task<F, T>(future: F) -> TaskHandle<T>
where
F: Future<Output = T> + Send + 'static,
T: Send + 'static,
{
tokio::task::spawn(future)
}

async-std's task API is slightly different (in tokio the output is Result<T, JoinError>), which requires some boilerplate:

common-async-runtime/async_std_task.rs
/// A shim to match tokio's API
pub struct TaskHandle<T>(async_std::task::JoinHandle<T>);

pub fn spawn_task<F, T>(future: F) -> TaskHandle<T>
where
F: Future<Output = T> + Send + 'static,
T: Send + 'static,
{
TaskHandle(async_std::task::spawn(future))
}

#[derive(Debug)]
pub struct JoinError;

impl std::error::Error for JoinError {}

// This is basically how you wrap a `Future`
impl<T> Future for TaskHandle<T> {
type Output = Result<T, JoinError>;

fn poll(
mut self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Self::Output> {
match self.0.poll_unpin(cx) {
std::task::Poll::Ready(res) => std::task::Poll::Ready(Ok(res)),
std::task::Poll::Pending => std::task::Poll::Pending,
}
}
}

In the library's Cargo.toml, you can simply include common-async-runtime as a dependency. This makes your library code 'pure': selecting an async runtime is now controlled by downstream. Similar to approach 1, this crate can be compiled without any async runtime, which is neat!

Conclusion

Happy hacking! Welcome to share your experience with the community.

· 5 min read
Chris Tsang

🎉 We are pleased to release SeaStreamer 0.3.x!

File Backend

A major addition in SeaStreamer 0.3 is the file backend. It implements the same high-level MPMC API, enabling streaming to and from files. There are different use cases. For example, it can be used to dump data from Redis / Kafka and process them locally, or as an intermediate file format for storage or transport.

The SeaStreamer File format, .ss, is pretty simple. It's very much like .ndjson, but binary. The file format is designed with the following goals:

  1. Binary data support without encoding overheads
  2. Efficiency in rewinding / seeking through a large dump
  3. Streaming-friendliness - File can be truncated without losing integrity

Let me explain in detail.

First of all, SeaStreamer File is a container format. It only concerns the message stream and framing, not the payload. It's designed to be paired with a binary message format like Protobuf or BSON.

Encode-free

JSON and CSV are great plain text file formats, but they are not binary friendly. Usually, to encode binary data, one would use base64, which encodes every 3 bytes of input as 4 characters of output; this imposes an expensive encoding / decoding overhead on top of the ~33% size inflation. In a binary protocol, delimiters are frequently used to signal message boundaries. As a consequence, byte stuffing is needed to escape the bytes.

In SeaStreamer, we want to avoid the encoding overhead entirely. The payload should be written to disk verbatim. So the file format revolves around constructing message frames and placing checksums to ensure that data is interpreted correctly.

Efficient seek

A delimiter-based protocol has an advantage: the byte stream can be sought to an arbitrary position, and we'd still have no trouble finding the next message.

Since SeaStreamer does not rely on delimiters, we can't easily align to message frames after a random seek. We solve this problem by placing beacons in a regular interval at fixed locations throughout the file. E.g. say the beacon interval is 1024, there will be a beacon at the 1024th byte, the 2048th, and so on. Then, every time we want to seek to a random location, we'd seek to the closest N * 1024 byte and read from there.
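
The alignment arithmetic is a simple round-down; here is a minimal sketch (the function is illustrative, not the actual SeaStreamer API):

/// Round a byte offset down to the closest beacon boundary.
fn closest_beacon(offset: u64, beacon_interval: u64) -> u64 {
    (offset / beacon_interval) * beacon_interval
}

// With an interval of 1024, a seek to byte 3000 starts reading at byte 2048.
assert_eq!(closest_beacon(3000, 1024), 2048);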

These beacons also double as indices: they contain summaries of the individual streams. So given a particular stream key and sequence number (or timestamp) to search for, SeaStreamer can quickly locate the message just by reading the beacons. It doesn't matter if the stream's messages are sparse!

Streaming-friendliness

It should always be safe to truncate files. It should be relatively easy to split a file into chunks. We should be able to tell if the data is corrupted.

SeaStreamer achieves this by computing a checksum for every message, and also the running checksum of the checksums for each stream. It's not enforced right now, but in theory we can detect if any messages are missing from a stream.

Summary

This file format is also easy to implement in different languages; we just made an (experimental) reader in TypeScript.

That's it! If you are interested, you can go and take a look at the format description.

Redis Backend

Redis Streams are underrated! They have high throughput and concurrency, and are best suited for non-persistent stream processing near or on the same host as the application.

The obstacle is probably in library support. Redis Streams' API is rather low level, and there aren't many high-level libraries to help with programming, as opposed to Kafka, which has versatile official programming libraries.

The pitfall is, it's not easy to maximize concurrency with the raw Redis API. To start, you'd need to pipeline XADD commands. You'd also need to time and batch XACKs so that it does not block reads and computation. And of course you want to separate the reads and writes on different threads.

SeaStreamer breaks these obstacles for you and offers a Kafka-like API experience!

Benchmark

In 0.3, we have done some optimizations to improve the throughput of the Redis and File backend. We set our initial benchmark at 100k messages per second, which hopefully we can further improve over time.

Our micro benchmark involves a simple program producing or consuming 100k messages, where each message has a payload of 256 bytes.

For Redis, it's running on the same computer in Docker. On my not-very-impressive laptop with a 10th Gen Intel Core i7, the numbers are roughly as follows:

Producer

redis  0.5s
stdio  0.5s
file   0.5s

Consumer

redis  1.0s
stdio  1.0s
file   1.1s

It practically means that we are comfortably in the realm of producing 100k messages per second, but are just about able to consume 100k messages in 1 second. Suggestions to performance improvements are welcome!

Community

SeaQL.org is an independent open-source organization run by passionate developers. If you like our projects, please star ⭐ and share our repositories. If you feel generous, a small donation via GitHub Sponsor will be greatly appreciated, and goes a long way towards sustaining the organization 🚢.

SeaStreamer is a community driven project. We welcome you to participate, contribute and together build for Rust's future 🦀.

· 8 min read
SeaQL Team
SeaORM 0.12 Banner

🎉 We are pleased to announce SeaORM 0.12 today!

We still remember the time when we first introduced SeaORM to the Rust community two years ago. We set out a goal to enable developers to build asynchronous database-driven applications in Rust.

Today, many open-source projects, a handful of startups and many more closed-source projects are using SeaORM. Thank you all who participated and contributed in the making!

SeaORM Star History

New Features 🌟

🧭 Seaography: GraphQL integration (preview)

Seaography example

Seaography is a GraphQL framework built on top of SeaORM. In 0.12, Seaography integration is built into sea-orm. Seaography allows you to build GraphQL resolvers quickly. With just a few commands, you can launch a GraphQL server from SeaORM entities!

While Seaography development is still in an early stage, it is especially useful in prototyping and building internal-use admin panels.

Read the documentation to learn more.

Added macro DerivePartialModel

#1597 Now you can easily perform a custom select to query only the columns you need

#[derive(DerivePartialModel, FromQueryResult)]
#[sea_orm(entity = "Cake")]
struct PartialCake {
name: String,
#[sea_orm(
from_expr = r#"SimpleExpr::FunctionCall(Func::upper(Expr::col((Cake, cake::Column::Name))))"#
)]
name_upper: String,
}

assert_eq!(
cake::Entity::find()
.into_partial_model::<PartialCake>()
.into_statement(DbBackend::Sqlite)
.to_string(),
r#"SELECT "cake"."name", UPPER("cake"."name") AS "name_upper" FROM "cake""#
);

Added Select::find_with_linked

#1728, #1743 Similar to find_with_related, you can now select related entities and consolidate the models.

// Consider the following link
pub struct BakedForCustomer;

impl Linked for BakedForCustomer {
type FromEntity = Entity;

type ToEntity = super::customer::Entity;

fn link(&self) -> Vec<RelationDef> {
vec![
super::cakes_bakers::Relation::Baker.def().rev(),
super::cakes_bakers::Relation::Cake.def(),
super::lineitem::Relation::Cake.def().rev(),
super::lineitem::Relation::Order.def(),
super::order::Relation::Customer.def(),
]
}
}

let res: Vec<(baker::Model, Vec<customer::Model>)> = Baker::find()
.find_with_linked(baker::BakedForCustomer)
.order_by_asc(baker::Column::Id)
.all(db)
.await?

Added DeriveValueType derive macro for custom wrapper types

#1720 So now you can use newtypes easily.

#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "custom_value_type")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub number: Integer,
// Postgres only
pub str_vec: StringVec,
}

#[derive(Clone, Debug, PartialEq, Eq, DeriveValueType)]
pub struct Integer(i32);

#[derive(Clone, Debug, PartialEq, Eq, DeriveValueType)]
pub struct StringVec(pub Vec<String>);

Which saves you the boilerplate of:

impl std::convert::From<StringVec> for Value { .. }

impl TryGetable for StringVec {
fn try_get_by<I: ColIdx>(res: &QueryResult, idx: I)
-> Result<Self, TryGetError> { .. }
}

impl ValueType for StringVec {
fn try_from(v: Value) -> Result<Self, ValueTypeErr> { .. }

fn type_name() -> String { "StringVec".to_owned() }

fn array_type() -> ArrayType { ArrayType::String }

fn column_type() -> ColumnType { ColumnType::String(None) }
}

Enhancements 🆙

#1433 Chained AND / OR join ON condition

Added more macro attributes to DeriveRelation

// Entity file

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
// By default, it's `JOIN `fruit` ON `cake`.`id` = `fruit`.`cake_id` AND `fruit`.`name` LIKE '%tropical%'`
#[sea_orm(
has_many = "super::fruit::Entity",
on_condition = r#"super::fruit::Column::Name.like("%tropical%")"#
)]
TropicalFruit,
// Specify `condition_type = "any"` to override it, now it becomes
// `JOIN `fruit` ON `cake`.`id` = `fruit`.`cake_id` OR `fruit`.`name` LIKE '%tropical%'`
#[sea_orm(
has_many = "super::fruit::Entity",
on_condition = r#"super::fruit::Column::Name.like("%tropical%")"#,
condition_type = "any",
)]
OrTropicalFruit,
}

#1508 Supports entity with composite primary key of arity 12

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "primary_key_of_12")]
pub struct Model {
#[sea_orm(primary_key, auto_increment = false)]
pub id_1: String,
...
#[sea_orm(primary_key, auto_increment = false)]
pub id_12: bool,
}

#1677 Added UpdateMany::exec_with_returning()

let models: Vec<Model> = Entity::update_many()
.col_expr(Column::Values, Expr::expr(..))
.exec_with_returning(db)
.await?;

#1511 Added MigratorTrait::migration_table_name() method to configure the name of migration table

#[async_trait::async_trait]
impl MigratorTrait for Migrator {
// Override the name of migration table
fn migration_table_name() -> sea_orm::DynIden {
Alias::new("override_migration_table_name").into_iden()
}
...
}

#1707 Added DbErr::sql_err() method to parse common database errors

assert!(matches!(
cake.into_active_model().insert(db).await
.expect_err("Insert a row with duplicated primary key")
.sql_err(),
Some(SqlErr::UniqueConstraintViolation(_))
));

assert!(matches!(
fk_cake.insert(db).await
.expect_err("Insert a row with invalid foreign key")
.sql_err(),
Some(SqlErr::ForeignKeyConstraintViolation(_))
));

#1737 Introduced new ConnAcquireErr

enum DbErr {
ConnectionAcquire(ConnAcquireErr),
..
}

enum ConnAcquireErr {
Timeout,
ConnectionClosed,
}

#1627 Added DatabaseConnection::ping()

|db: DatabaseConnection| {
assert!(db.ping().await.is_ok());
db.clone().close().await;
assert!(matches!(db.ping().await, Err(DbErr::ConnectionAcquire(_))));
}

#1708 Added TryInsert that does not panic on empty inserts

// now, you can do:
let res = Bakery::insert_many(std::iter::empty())
.on_empty_do_nothing()
.exec(db)
.await;

assert!(matches!(res, Ok(TryInsertResult::Empty)));

#1712 Insert on conflict do nothing to return Ok

let on = OnConflict::column(Column::Id).do_nothing().to_owned();

// Existing behaviour
let res = Entity::insert_many([..]).on_conflict(on).exec(db).await;
assert!(matches!(res, Err(DbErr::RecordNotInserted)));

// New API; now you can:
let res =
Entity::insert_many([..]).on_conflict(on).do_nothing().exec(db).await;
assert!(matches!(res, Ok(TryInsertResult::Conflicted)));

#1740, #1755 Replacing sea_query::Iden with sea_orm::DeriveIden

To provide a more consistent interface, sea-query/derive is no longer enabled by sea-orm; as such, Iden no longer works as a derive macro (it's still a trait).

// then:

#[derive(Iden)]
#[iden = "category"]
pub struct CategoryEnum;

#[derive(Iden)]
pub enum Tea {
Table,
#[iden = "AfternoonTea"]
EverydayTea,
}

// now:

#[derive(DeriveIden)]
#[sea_orm(iden = "category")]
pub struct CategoryEnum;

#[derive(DeriveIden)]
pub enum Tea {
Table,
#[sea_orm(iden = "AfternoonTea")]
EverydayTea,
}

New Release Train Ferry 🚢

This is the 12th major release of SeaORM! Initially, a major version was released every month. That gradually became every 2 to 3 months, and now it's been 6 months since the last major release. As our userbase grew, and some users are already running SeaORM in production, we understand the importance of having a stable API surface and feature set.

That's why we are committed to:

  1. Reviewing breaking changes with strict scrutiny
  2. Expanding our test suite to cover all features of our library
  3. Never remove features, and consider deprecation carefully

Today, the architecture of SeaORM is pretty solid and stable, and with the 0.12 release where we paid back a lot of technical debt, we will be able to deliver new features and enhancements without breaking. As our major dependency SQLx is not 1.0 yet, technically we cannot be 1.0.

We are still advancing rapidly, and we will always make a new release as soon as SQLx makes a new release, so that you can upgrade everything at once. As a result, the next major release of SeaORM will come out 6 months from now, or when SQLx makes a new release, whichever is earlier.

Community Survey 📝

SeaQL is an independent open-source organization. Our goal is to enable developers to build data intensive applications in Rust. If you are using SeaORM, please participate in the SeaQL Community Survey!

By completing this survey, you will help us gather insights into how you, the developer, are using our libraries and identify means to improve your developer experience. We will also publish an annual survey report to summarize our findings.

If you are a happy user of SeaORM, consider writing us a testimonial!

A big thanks to DigitalOcean, who sponsor our server hosting, to JetBrains, who sponsor our IDE, and to every sponsor on GitHub Sponsors!

If you feel generous, a small donation will be greatly appreciated, and goes a long way towards sustaining the organization.

A big shout out to our sponsors 😇:

Shane Sveller
Émile Fugulin
Afonso Barracha
Jacob Trueb
Natsuki Ikeguchi
Marlon Mueller-Soppart
KallyDev
Dean Sheather
Manfred Lee
Roland Gorácz
IceApinan
René Klačan
Unnamed Sponsor

What's Next for SeaORM? ⛵

Open-source is never-ending work, and we are actively looking for ways to sustain the project. You can support our endeavour by starring & sharing our repositories and becoming a sponsor.

We are considering multiple directions to generate revenue for the organization. If you have any suggestions, or want to join or collaborate with us, please contact us via hello[at]sea-ql.org.

Thank you for your support, and together we can make open-source sustainable.

· 5 min read
Chris Tsang

We are pleased to introduce SeaStreamer to the Rust community today. SeaStreamer is a stream processing toolkit to help you build stream processors in Rust.

At SeaQL we want to make Rust the best programming platform for data engineering. Where SeaORM is the essential tool for working with SQL databases, SeaStreamer aims to be your essential toolkit for working with streams.

Currently SeaStreamer provides integration with Kafka and Redis.

Let's have a quick tour of SeaStreamer.

High level async API

  • High level async API that supports both async-std and tokio
  • Mutex-free implementation¹: concurrency achieved by message passing
  • A comprehensive type system that guides/restricts you with the API

Below is a basic Kafka consumer:

#[tokio::main]
async fn main() -> Result<()> {
env_logger::init();

let stream: StreamUrl = "kafka://streamer.sea-ql.org:9092/my_stream".parse()?;
let streamer = KafkaStreamer::connect(stream.streamer(), Default::default()).await?;
let mut options = KafkaConsumerOptions::new(ConsumerMode::RealTime);
options.set_auto_offset_reset(AutoOffsetReset::Earliest);
let consumer = streamer
.create_consumer(stream.stream_keys(), options)
.await?;

loop {
let mess = consumer.next().await?;
println!("{}", mess.message().as_str()?);
}
}

Consumer::stream() returns an object that implements the Stream trait, which allows you to do neat things:

let items = consumer
.stream()
.take(num)
.map(process_message)
.collect::<Vec<_>>()
.await

Trait-based abstract interface

All SeaStreamer backends implement a common abstract interface, offering you a familiar API. Below is a basic Redis consumer, which is nearly the same as the previous example:

#[tokio::main]
async fn main() -> Result<()> {
env_logger::init();

let stream: StreamUrl = "redis://localhost:6379/my_stream".parse()?;
let streamer = RedisStreamer::connect(stream.streamer(), Default::default()).await?;
let mut options = RedisConsumerOptions::new(ConsumerMode::RealTime);
options.set_auto_stream_reset(AutoStreamReset::Earliest);
let consumer = streamer
.create_consumer(stream.stream_keys(), options)
.await?;

loop {
let mess = consumer.next().await?;
println!("{}", mess.message().as_str()?);
}
}

Redis Streams Support

SeaStreamer Redis provides Kafka-like stream semantics:

  • Non-group streaming with AutoStreamReset option
  • Consumer-group-based streaming with auto-ack and/or auto-commit
  • Load balancing among consumers with automatic failover
  • Seek/rewind to point in time

You don't have to call XADD, XREAD, XACK, etc... anymore!

Enum-based generic interface

The trait-based API requires you to designate the concrete Streamer type for monomorphization; otherwise the code cannot compile.

Akin to how SeaORM implements runtime polymorphism, SeaStreamer provides an enum-based generic streamer, in which the backend is selected at runtime.

Here is an illustration (full example):

// sea-streamer-socket
pub struct SeaConsumer {
backend: SeaConsumerBackend,
}

enum SeaConsumerBackend {
#[cfg(feature = "backend-kafka")]
Kafka(KafkaConsumer),
#[cfg(feature = "backend-redis")]
Redis(RedisConsumer),
#[cfg(feature = "backend-stdio")]
Stdio(StdioConsumer),
}

// Your code
let uri: StreamerUri = "kafka://localhost:9092".parse()?; // or
let uri: StreamerUri = "redis://localhost:6379".parse()?; // or
let uri: StreamerUri = "stdio://".parse()?;

// SeaStreamer will be backed by Kafka, Redis or Stdio depending on the URI
let streamer = SeaStreamer::connect(uri, Default::default()).await?;

// Set backend-specific options
let mut options = SeaConsumerOptions::new(ConsumerMode::Resumable);
options.set_kafka_consumer_options(|options: &mut KafkaConsumerOptions| { .. });
options.set_redis_consumer_options(|options: &mut RedisConsumerOptions| { .. });
let mut consumer: SeaConsumer = streamer.create_consumer(stream_keys, options).await?;

// You can still retrieve the concrete type
let kafka: Option<&mut KafkaConsumer> = consumer.get_kafka();
let redis: Option<&mut RedisConsumer> = consumer.get_redis();

So you can "write once, stream anywhere"!

Good old unix pipe

In SeaStreamer, stdin & stdout can be used as stream source and sink.

Say you are developing some processors to transform a stream in several stages:

./processor_1 --input kafka://localhost:9092/input --output kafka://localhost:9092/stage_1 &
./processor_2 --input kafka://localhost:9092/stage_1 --output kafka://localhost:9092/stage_2 &
./processor_3 --input kafka://localhost:9092/stage_2 --output kafka://localhost:9092/output &

It would be great if we could simply pipe the processors together, right?

With SeaStreamer, you can do the following:

./processor_1 --input kafka://localhost:9092/input --output stdio:///stream |
./processor_2 --input stdio:///stream --output stdio:///stream |
./processor_3 --input stdio:///stream --output kafka://localhost:9092/output

All without recompiling the stream processors! Now, you can develop locally with the comfort of using |, >, < and your favourite unix program in the shell.

Testable

SeaStreamer encourages you to write tests at all levels:

  • You can execute tests involving several stream processors in the same OS process
  • You can execute tests involving several OS processes by connecting them with pipes
  • You can execute tests involving several stream processors with Redis / Kafka

All against the same piece of code! Let SeaStreamer take away the boilerplate and mocking facility from your codebase.

Below is an example of intra-process testing, which can be run with cargo test without any dependency or side-effects:

let stream = StreamKey::new("test")?;
let mut options = StdioConnectOptions::default();
options.set_loopback(true); // messages produced will be fed back to consumers
let streamer = StdioStreamer::connect(StreamerUri::zero(), options).await?;
let producer = streamer.create_producer(stream.clone(), Default::default()).await?;
let mut consumer = streamer.create_consumer(&[stream.clone()], Default::default()).await?;

for i in 0..5 {
let mess = format!("{}", i);
producer.send(mess)?;
}

let seq = collect(&mut consumer, 5).await;
assert_eq!(seq, [0, 1, 2, 3, 4]);

Getting started

If you are eager to get started with SeaStreamer, you can check out our set of examples:

  • consumer: A basic consumer
  • producer: A basic producer
  • processor: A basic stream processor
  • resumable: A resumable stream processor that continues from where it left off
  • buffered: An advanced stream processor with internal buffering and batch processing
  • blocking: An advanced stream processor for handling blocking / CPU-bound tasks

Read the official documentation to learn more.

Roadmap

A few major components we plan to develop:

  • File Backend
  • Redis Cluster

We welcome you to join our Discussions if you have thoughts or ideas!

People

SeaStreamer is designed and developed by the same mind who brought you SeaORM:

Chris Tsang

Community

SeaQL.org is an independent open-source organization run by passionate developers. If you like our projects, please star ⭐ and share our repositories. If you feel generous, a small donation via GitHub Sponsor will be greatly appreciated, and goes a long way towards sustaining the organization 🚢.

SeaStreamer is a community driven project. We welcome you to participate, contribute and together build for Rust's future 🦀.


  ¹ Except sea-streamer-stdio, which only contends on consumer add/drop.

· 10 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.11.0!

Data Loader

[#1443, #1238] The LoaderTrait provides an API to load related entities in batches.

Consider this one to many relation:

let cake_with_fruits: Vec<(cake::Model, Vec<fruit::Model>)> = Cake::find()
.find_with_related(Fruit)
.all(db)
.await?;

The generated SQL is:

SELECT
"cake"."id" AS "A_id",
"cake"."name" AS "A_name",
"fruit"."id" AS "B_id",
"fruit"."name" AS "B_name",
"fruit"."cake_id" AS "B_cake_id"
FROM "cake"
LEFT JOIN "fruit" ON "cake"."id" = "fruit"."cake_id"
ORDER BY "cake"."id" ASC

The "one" side's (Cake) data will be duplicated. If N is a large number, this would result in more data being transferred over the wire. Using the Loader ensures each model is transferred only once.

The following loads the same data as above, but with two queries:

let cakes: Vec<cake::Model> = Cake::find().all(db).await?;
let fruits: Vec<Vec<fruit::Model>> = cakes.load_many(Fruit, db).await?;

for (cake, fruits) in cakes.into_iter().zip(fruits.into_iter()) { .. }
SELECT "cake"."id", "cake"."name" FROM "cake"
SELECT "fruit"."id", "fruit"."name", "fruit"."cake_id" FROM "fruit" WHERE "fruit"."cake_id" IN (..)

You can even apply filters on the related entity:

let fruits_in_stock: Vec<Vec<fruit::Model>> = cakes.load_many(
fruit::Entity::find().filter(fruit::Column::Stock.gt(0i32)),
db,
).await?;
SELECT "fruit"."id", "fruit"."name", "fruit"."cake_id" FROM "fruit"
WHERE "fruit"."stock" > 0 AND "fruit"."cake_id" IN (..)

To learn more, read the relation docs.

Transaction Isolation Level and Access Mode

[#1230] The transaction_with_config and begin_with_config allows you to specify the IsolationLevel and AccessMode.

For now, they are only implemented for MySQL and Postgres. In order to align their semantic difference, MySQL will execute SET TRANSACTION commands before begin transaction, while Postgres will execute SET TRANSACTION commands after begin transaction.

db.transaction_with_config::<_, _, DbErr>(
|txn| { ... },
Some(IsolationLevel::ReadCommitted),
Some(AccessMode::ReadOnly),
)
.await?;

let transaction = db
.begin_with_config(IsolationLevel::ReadCommitted, AccessMode::ReadOnly)
.await?;

To learn more, read the transaction docs.

Cast Column Type on Select and Save

[#1304] If you need to select a column as one type but save it into the database as another, you can specify the select_as and the save_as attributes to perform the casting. A typical use case is selecting a column of type citext (case-insensitive text) as String in Rust and saving it into the database as citext. One should define the model field as below:

#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "ci_table")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
#[sea_orm(select_as = "text", save_as = "citext")]
pub case_insensitive_text: String
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}

Changes to ActiveModelBehavior

[#1328, #1145] The methods of ActiveModelBehavior now have Connection as an additional parameter. It enables you to perform database operations, for example, logging the changes made to the existing model or validating the data before inserting it.

#[async_trait]
impl ActiveModelBehavior for ActiveModel {
/// Create a new ActiveModel with default values. Also used by `Default::default()`.
fn new() -> Self {
Self {
uuid: Set(Uuid::new_v4()),
..ActiveModelTrait::default()
}
}

/// Will be triggered before insert / update
async fn before_save<C>(self, db: &C, insert: bool) -> Result<Self, DbErr>
where
C: ConnectionTrait,
{
// Logging changes
edit_log::ActiveModel {
action: Set("before_save".into()),
values: Set(serde_json::json!(model)),
..Default::default()
}
.insert(db)
.await?;

Ok(self)
}
}

To learn more, read the entity docs.

Execute Unprepared SQL Statement

[#1327] You can execute an unprepared SQL statement with ConnectionTrait::execute_unprepared.

// Use `execute_unprepared` if the SQL statement doesn't have value bindings
db.execute_unprepared(
"CREATE TABLE `cake` (
`id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,
`name` varchar(255) NOT NULL
)"
)
.await?;

// Construct a `Statement` if the SQL contains value bindings
let stmt = Statement::from_sql_and_values(
manager.get_database_backend(),
r#"INSERT INTO `cake` (`name`) VALUES (?)"#,
["Cheese Cake".into()]
);
db.execute(stmt).await?;

Select Into Tuple

[#1311] You can select a tuple (or single value) with the into_tuple method.

let res: Vec<(String, i64)> = cake::Entity::find()
.select_only()
.column(cake::Column::Name)
.column(cake::Column::Id.count())
.group_by(cake::Column::Name)
.into_tuple()
.all(&db)
.await?;

Atomic Migration

[#1379] Migrations on Postgres are executed atomically, which means migration scripts run inside a transaction. Changes to the database will be rolled back if the migration fails. However, atomic migration is not supported on MySQL and SQLite.

You can start a transaction inside each migration to perform operations like seeding sample data for a newly created table.
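
For example, here is a minimal sketch of seeding data inside a migration (the seeded rows are elided; this assumes the connection returned by get_connection supports TransactionTrait::begin, as a live connection does):

use sea_orm::TransactionTrait;

// Inside `impl MigrationTrait for Migration`
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
    let txn = manager.get_connection().begin().await?;
    // ... insert seed rows here, using `&txn` as the connection ...
    txn.commit().await?;
    Ok(())
}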

Types Support

  • [#1325] Support various UUID formats that are available in uuid::fmt module
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "uuid_fmt")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub uuid: Uuid,
pub uuid_braced: uuid::fmt::Braced,
pub uuid_hyphenated: uuid::fmt::Hyphenated,
pub uuid_simple: uuid::fmt::Simple,
pub uuid_urn: uuid::fmt::Urn,
}
  • [#1210] Support vector of enum for Postgres
#[derive(Debug, Clone, PartialEq, Eq, EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "String", db_type = "Enum", enum_name = "tea")]
pub enum Tea {
#[sea_orm(string_value = "EverydayTea")]
EverydayTea,
#[sea_orm(string_value = "BreakfastTea")]
BreakfastTea,
}

#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "enum_vec")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub teas: Vec<Tea>,
pub teas_opt: Option<Vec<Tea>>,
}
  • [#1414] Support ActiveEnum field as primary key
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "enum_primary_key")]
pub struct Model {
#[sea_orm(primary_key, auto_increment = false)]
pub id: Tea,
pub category: Option<Category>,
pub color: Option<Color>,
}

Opt-in Unstable Internal APIs

By enabling the sea-orm-internal feature, you opt in to unstable internal APIs, including direct access to the underlying SQLx types (see #1297 and #1434 under SeaORM Enhancements below).
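
For instance, you can reach the inner SQLx pool for operations SeaORM doesn't cover; a minimal sketch, assuming a Postgres-backed connection and the sea-orm-internal feature (the accessor is the one added in #1297, listed below):

// `db` is a sea_orm::DatabaseConnection connected to Postgres
let pool: &sqlx::PgPool = db.get_postgres_connection_pool();
println!("connections in pool: {}", pool.size());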

Breaking Changes

  • [#1366] sea-query has been upgraded to 0.28.x, which comes with some improvements and breaking changes. Please follow the release notes for more details

  • [#1420] sea-orm-cli: the generate entity command enables the --universal-time flag by default

  • [#1425] Added RecordNotInserted and RecordNotUpdated to DbErr

  • [#1327] Added ConnectionTrait::execute_unprepared method

  • [#1311] The required method of TryGetable changed:

// then
fn try_get(res: &QueryResult, pre: &str, col: &str) -> Result<Self, TryGetError>;
// now; ColIdx can be `&str` or `usize`
fn try_get_by<I: ColIdx>(res: &QueryResult, index: I) -> Result<Self, TryGetError>;

So if you implemented it yourself:

impl TryGetable for XXX {
- fn try_get(res: &QueryResult, pre: &str, col: &str) -> Result<Self, TryGetError> {
+ fn try_get_by<I: sea_orm::ColIdx>(res: &QueryResult, idx: I) -> Result<Self, TryGetError> {
- let value: YYY = res.try_get(pre, col).map_err(TryGetError::DbErr)?;
+ let value: YYY = res.try_get_by(idx).map_err(TryGetError::DbErr)?;
..
}
}
  • [#1328] The ActiveModelBehavior trait becomes async trait. If you overridden the default ActiveModelBehavior implementation:
#[async_trait::async_trait]
impl ActiveModelBehavior for ActiveModel {
async fn before_save<C>(self, db: &C, insert: bool) -> Result<Self, DbErr>
where
C: ConnectionTrait,
{
// ...
}

// ...
}
  • [#1425] DbErr::RecordNotFound("None of the database rows are affected") is moved to a dedicated error variant DbErr::RecordNotUpdated
let res = Update::one(cake::ActiveModel {
name: Set("Cheese Cake".to_owned()),
..model.into_active_model()
})
.exec(&db)
.await;

// then
assert_eq!(
res,
Err(DbErr::RecordNotFound(
"None of the database rows are affected".to_owned()
))
);

// now
assert_eq!(res, Err(DbErr::RecordNotUpdated));
  • [#1395] sea_orm::ColumnType was replaced by sea_query::ColumnType
    • Method ColumnType::def was moved to ColumnTypeTrait
    • ColumnType::Binary becomes a tuple variant, which takes an additional sea_query::BlobSize option
    • ColumnType::Custom takes a sea_query::DynIden instead of String, and thus a new method custom is added (note the lowercase)
// Compact Entity
#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "fruit")]
pub struct Model {
- #[sea_orm(column_type = r#"Custom("citext".to_owned())"#)]
+ #[sea_orm(column_type = r#"custom("citext")"#)]
pub column: String,
}
// Expanded Entity
impl ColumnTrait for Column {
type EntityName = Entity;

fn def(&self) -> ColumnDef {
match self {
- Self::Column => ColumnType::Custom("citext".to_owned()).def(),
+ Self::Column => ColumnType::custom("citext").def(),
}
}
}

SeaORM Enhancements

  • [#1256] Refactor schema module to expose functions for database alteration
  • [#1346] Generate compact entity with #[sea_orm(column_type = "JsonBinary")] macro attribute
  • [#1367] MockDatabase::append_exec_results(), MockDatabase::append_query_results(), MockDatabase::append_exec_errors() and MockDatabase::append_query_errors() take any type that implements the IntoIterator trait
  • [#1362] find_by_id and delete_by_id accept any value that converts into the primary key type
  • [#1410] QuerySelect::offset and QuerySelect::limit take Into<Option<u64>>, where passing None resets them
  • [#1236] Added DatabaseConnection::close
  • [#1381] Added is_null getter for ColumnDef
  • [#1177] Added ActiveValue::reset to convert Unchanged into Set
  • [#1415] Added QueryTrait::apply_if to optionally apply a filter (see the sketch after this list)
  • Added the sea-orm-internal feature flag to expose some SQLx types
    • [#1297] Added DatabaseConnection::get_*_connection_pool() for accessing the inner SQLx connection pool
    • [#1434] Re-exporting SQLx errors
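
A quick look at QueryTrait::apply_if: a minimal sketch, assuming the familiar cake entity and a db connection; the closure runs only when the Option holds a value:

use sea_orm::{entity::*, query::*};

let name_filter: Option<String> = Some("Cheese Cake".to_owned());
let cakes = cake::Entity::find()
    // Applies the filter only when `name_filter` is `Some(...)`
    .apply_if(name_filter, |query, v| query.filter(cake::Column::Name.eq(v)))
    .all(&db)
    .await?;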

CLI Enhancements

  • [#846, #1186, #1318] Generate #[serde(skip_deserializing)] for primary key columns
  • [#1171, #1320] Generate #[serde(skip)] for hidden columns
  • [#1124, #1321] Generate entity with extra derives and attributes for model struct

Integration Examples

SeaORM plays well with the other crates in the async ecosystem. We maintain an array of example projects for building REST, GraphQL and gRPC services. More examples wanted!

Our GitHub Sponsor profile is up! SeaQL.org is an independent open-source organization run by passionate developers. If you enjoy using SeaORM, please star and share our repositories. If you feel generous, a small donation will be greatly appreciated, and goes a long way towards sustaining the project.

A big shout out to our sponsors 😇:

Afonso Barracha
Émile Fugulin
Dean Sheather
Shane Sveller
Sakti Dwi Cahyono
Nick Price
Roland Gorácz
Henrik Giesel
Jacob Trueb
Naoki Ikeguchi
Manfred Lee
Marcus Buffett
efrain2007

What's Next?

SeaQL is a community driven project. We welcome you to participate, contribute and build together for Rust's future.

Here is the roadmap for SeaORM 0.12.x.

· 2 min read
Chris Tsang

FAQ.02 Why the empty enum Relation {} is needed even if an Entity has no relations?

Consider the following example Post Entity:

use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
#[sea_orm(table_name = "posts")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub title: String,
pub text: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}

The two lines defining Relation seem quite unnecessary, right?

To explain the problem, let's dive slightly deeper into the macro-expanded entity:

The DeriveRelation macro simply implements the RelationTrait:

impl RelationTrait for Relation {
fn def(&self) -> RelationDef {
match self {
_ => unreachable!()
}
}
}

Which in turn is needed by EntityTrait as an associated type:

impl EntityTrait for Entity {
type Relation = Relation;
...
}

Wouldn't it be ideal if, when the user does not specify this associated type, the library automatically filled in a stub to satisfy the type system?

Turns out, there is such a feature in Rust! It is an unstable feature called associated_type_defaults.

Basically, it allows trait definitions to specify a default associated type, allowing it to be elided:

// only compiles in nightly
trait EntityTrait {
type Relation: RelationTrait = EmptyRelation;
}

Due to our commitment to stable Rust, this may not land in SeaORM very soon. When it is stabilized, do remind us to implement this feature to get rid of those two lines!

· 4 min read
SeaQL Team

SeaQL.org offers internships tailored to university students. In fact, it will be the 3rd cohort in 2023.

The internships normally take place during summer and winter semester breaks. During the internship period, you will work dedicatedly on a project and publish the project’s outcome at the end.

The striking aspect of our mode of operation is that it covers the entire lifecycle of software development, from Design ➡️ Implementation ➡️ Testing ➡️ Delivery. You will be amazed by how much you can achieve in such a short period of time!

To date, StarfishQL ✴️ and Seaography 🧭 are two great projects our team has created in the past year. We pride ourselves on careful planning, consistent execution, and a pragmatic approach to software engineering. I spend a huge amount of time on idea evaluation: if the scope of the project is too small, it will be uninteresting, but if it is too large, it will fail to be delivered.

Fellow undergraduates, here are a few good reasons why you should participate in internships at an open-source organization like SeaQL:

  1. A tangible showcase on your CV: open-source work is published, inspectable and has real-world impact. We will also ensure that it has good branding, graphics, and visibility.
  2. Not driven by a business process, we do not compromise on quality of work. We do not have a proprietary development process, so it’s all open-source tools with transferable skills.
  3. You will contribute to the community and will interact with people across the world. Collaboration on open source is the best thing humanity ever invented. You will only believe me when you have experienced it first-hand.
  4. Because you are the driver of the project you work on, it allows you to uncover something more about yourself, in particular - abilities and discipline: you always have had under/over-estimated yourself in one aspect or another.

Here are several things you are going to learn:

  1. "Thinking > Programming": the more time you spend on thinking beforehand, the better the code you are going to write. And the more time you spend on reviewing afterwards, the better the code is going to be.
  2. How to organize a codebase. Make good use of the Rust type system to craft a modular, testable codebase.
  3. Test automation. Every SeaQL project is continuously tested, and this is an integral part of our engineering process.
  4. Documentation. Our software aims to provide good documentation that is comprehensive, easy to follow, and fun to read.
  5. Performance tuning. Depending on the project, we might do some benchmarking and optimization. But in general, we steer you away from writing code that creates unnecessary overhead.

We were a mentor organization in GSoC 2022 and may be in 2023 (update: we were not accepted into GSoC 2023). We also offer internships outside of GSoC. So, what are the requirements when you become a contributor?

  1. Be passionate. You must show your passion in open-source and software engineering, so a good GitHub profile with some participation is needed.
  2. Be dedicated. This is a full-time job. While being fully remote and flexible on hours, you must have no other commitment or duties during the stipulated internship period.
  3. Be open-minded. You should listen carefully to your mentors and act on their advice accordingly.
  4. Write more. Communicate your thoughts and progress on all channels in an organized manner.

Don’t just listen to me though. Here is what our past interns say:

Be well-prepared for your upcoming career in this technology industry! Follow us on GitHub and Twitter now, and stay tuned for future announcements.

· 4 min read
Chris Tsang

We are calling for contributors and reviewers for SeaQL projects 📢!

The SeaQL userbase has been steadily growing in the past year, and it’s a pleasure for us to have helped individuals and start-ups to build their projects in Rust. However, the volume of questions, issues and pull requests is nearly saturating our core members’ capacity.

But again, thank you everyone for participating in the community!

If your project depends on SeaQL and you want to help us, here are some suggestions (if you have not already, star all our repositories and follow us on Twitter):

  1. Financial Contribution. You can sponsor us on GitHub and those will be used to cover our expenses. As a courtesy, we listen to our sponsors for their needs and use cases, and we also communicate our organizational development from time-to-time.
  2. Code Contribution. Opening a PR with us is always appreciated! To get started, you can go through our issue trackers and pick one to handle. If you are thinking of developing a substantial feature, start with drafting a "Proposal & Implementation Plan" (PIP).
  3. Knowledge Contribution. There are various formats of knowledge sharing: tutorial, cookbook, QnA and Discord. You can open PRs to our documentation repositories or publish on your own. We will be happy to list it in our learning resources section. Keep an eye on our GitHub Discussions and Discord and help others where you can!
  4. Code Review. This is an important part of our engineering process. Right now, only 3 of our core members serve as reviewers. Non-core members can also become reviewers and I invite you to become one!

Now, I’d like to outline our review policy: for maturing projects, each merged PR has to be approved by at least two reviewers, one of whom must be a core member; self-review is allowed. Here are some examples:

  • A core member opened a PR, another core member approved ✅
  • A core member opened a PR, a reviewer approved ✅
  • A reviewer opened a PR, a core member approved ✅
  • A reviewer opened a PR, another reviewer approved ⛔
  • A contributor opened a PR, 2 core members approved ✅
  • A contributor opened a PR, a core member and a reviewer approved ✅
  • A contributor opened a PR, 2 reviewers approved ⛔

In a nutshell, at least two pairs of trusted eyes should have gone through each PR.

What are the criteria when reviewing a PR?

The following questions should all be answered yes.

  1. Implementation, documentation and tests
    1. Is the implementation easy to follow (have meaningful variable and function names)?
    2. Is there sufficient documentation for the API?
    3. Are there adequate tests covering various cases?
  2. API design
    1. Is the API self-documenting so users can understand its use easily?
    2. Is the API style consistent with our existing API?
    3. Does the API make reasonable use of the type system to enforce constraints?
    4. Are the failure paths and error messages clear?
    5. Are all breaking changes justified and documented?
  3. Functionality
    1. Does the feature make sense in computer science terms?
    2. Does the feature actually work with all our supported backends?
    3. Are all caveats discussed and eliminated / documented?
  4. Architecture
    1. Does it fit with the existing architecture of our codebase?
    2. Is it not going to create technical debt / maintenance burden?
    3. Does it not break abstraction?

1, 2 & 3 are fairly objective and factual; however, the answers to 4 probably require some discussion and debate. If a consensus cannot be reached, @tyt2y3 will make the final verdict.

Who are the current reviewers?

As of today, SeaQL has 3 core members who are also reviewers:

Chris Tsang
Founder. Maintains all projects.
Billy Chan
Founding member. Co-maintainer of SeaORM and Seaography.
Ivan Krivosheev
Joined in 2022. Co-maintainer of SeaQuery.

How to become a reviewer?

We are going to invite a few contributors we have worked closely with, but you can also volunteer – the requirement is that you have made substantial code contributions to our projects and have shown familiarity with our engineering practices.

Over time, when you have made significant contribution to our organization, you can also become a core member.

Let’s build for Rust's future together 🦀

· 4 min read
SeaQL Team

🎉 We are pleased to release SeaQuery 0.28.0! Here are some feature highlights 🌟:

New IdenStatic trait for static identifier

[#508] Representing an identifier with &'static str. The IdenStatic trait looks like this:

pub trait IdenStatic: Iden + Copy + 'static {
fn as_str(&self) -> &'static str;
}

You can derive it easily for your existing Iden. Just change #[derive(Iden)] into #[derive(IdenStatic)].

#[derive(IdenStatic)]
enum User {
Table,
Id,
FirstName,
LastName,
#[iden = "_email"]
Email,
}

assert_eq!(User::Email.as_str(), "_email");

New PgExpr and SqliteExpr traits for backend specific expressions

[#519] Postgres-specific and SQLite-specific expressions have been moved into their corresponding traits. You need to import the trait into scope before constructing expressions with those backend-specific methods.

// Importing `PgExpr` trait before constructing Postgres expression
use sea_query::{extension::postgres::PgExpr, tests_cfg::*, *};

let query = Query::select()
.columns([Font::Name, Font::Variant, Font::Language])
.from(Font::Table)
.and_where(Expr::val("a").concatenate("b").concat("c").concat("d"))
.to_owned();

assert_eq!(
query.to_string(PostgresQueryBuilder),
r#"SELECT "name", "variant", "language" FROM "font" WHERE 'a' || 'b' || 'c' || 'd'"#
);
// Importing `SqliteExpr` trait before constructing SQLite expression
use sea_query::{extension::sqlite::SqliteExpr, tests_cfg::*, *};

let query = Query::select()
.column(Font::Name)
.from(Font::Table)
.and_where(Expr::col(Font::Name).matches("a"))
.to_owned();

assert_eq!(
query.to_string(SqliteQueryBuilder),
r#"SELECT "name" FROM "font" WHERE "name" MATCH 'a'"#
);

Bug Fixes

// given
let (statement, values) = sea_query::Query::select()
.column(Glyph::Id)
.from(Glyph::Table)
.cond_where(Cond::any()
.add(Cond::all()) // empty all() => TRUE
.add(Cond::any()) // empty any() => FALSE
)
.build(sea_query::MysqlQueryBuilder);

// old behavior
assert_eq!(statement, r#"SELECT `id` FROM `glyph`"#);

// new behavior
assert_eq!(
statement,
r#"SELECT `id` FROM `glyph` WHERE (TRUE) OR (FALSE)"#
);

// a complex example
let (statement, values) = Query::select()
.column(Glyph::Id)
.from(Glyph::Table)
.cond_where(
Cond::all()
.add(Cond::all().not())
.add(Cond::any().not())
.not(),
)
.build(MysqlQueryBuilder);

assert_eq!(
statement,
r#"SELECT `id` FROM `glyph` WHERE NOT ((NOT TRUE) AND (NOT FALSE))"#
);

Breaking Changes

  • [#535] MSRV is up to 1.62
# Make sure you're running SeaQuery with Rust 1.62+ 🦀
$ rustup update
  • [#492] ColumnType::Array definition changed from Array(SeaRc<Box<ColumnType>>) to Array(SeaRc<ColumnType>)
  • [#475] Func::* now returns FunctionCall instead of SimpleExpr
  • [#475] Func::coalesce now accepts IntoIterator<Item = SimpleExpr> instead of IntoIterator<Item = Into<SimpleExpr>>
  • [#475] Removed Expr::arg and Expr::args - these functions are no longer needed
  • [#507] Moved all Postgres specific operators to PgBinOper
  • [#476] Expr methods that used to accept Into<Value> now accept Into<SimpleExpr>
  • [#476] Expr::is_in and Expr::is_not_in now accept Into<SimpleExpr> instead of Into<Value>, converting it to SimpleExpr::Tuple instead of SimpleExpr::Values
  • [#475] Expr::expr now accepts Into<SimpleExpr> instead of SimpleExpr
  • [#519] Moved Postgres specific Expr methods to new trait PgExpr
  • [#528] Expr::equals now accepts C: IntoColumnRef instead of T: IntoIden, C: IntoIden
use sea_query::{*, tests_cfg::*};

let query = Query::select()
.columns([Char::Character, Char::SizeW, Char::SizeH])
.from(Char::Table)
.and_where(
Expr::col((Char::Table, Char::FontId))
- .equals(Font::Table, Font::Id)
+ .equals((Font::Table, Font::Id))
)
.to_owned();

assert_eq!(
query.to_string(MysqlQueryBuilder),
r#"SELECT `character`, `size_w`, `size_h` FROM `character` WHERE `character`.`font_id` = `font`.`id`"#
);
  • [#525] Removed integer and date time column types' display length / precision option

API Additions

  • [#475] Added SelectStatement::from_function
use sea_query::{tests_cfg::*, *};

let query = Query::select()
.column(ColumnRef::Asterisk)
.from_function(Func::random(), Alias::new("func"))
.to_owned();

assert_eq!(
query.to_string(MysqlQueryBuilder),
r#"SELECT * FROM RAND() AS `func`"#
);
  • [#486] Added binary operators from the Postgres pg_trgm extension
use sea_query::{extension::postgres::PgBinOper, tests_cfg::*, *};

assert_eq!(
Query::select()
.expr(Expr::col(Font::Name).binary(PgBinOper::WordSimilarity, Expr::value("serif")))
.from(Font::Table)
.to_string(PostgresQueryBuilder),
r#"SELECT "name" <% 'serif' FROM "font""#
);
  • [#473] Added ILIKE and NOT ILIKE operators
  • [#510] Added the mul and div methods for SimpleExpr
  • [#513] Added the MATCH, -> and ->> operators for SQLite
use sea_query::{extension::sqlite::SqliteBinOper, tests_cfg::*, *};

assert_eq!(
Query::select()
.column(Char::Character)
.from(Char::Table)
.and_where(Expr::col(Char::Character).binary(SqliteBinOper::Match, Expr::val("test")))
.build(SqliteQueryBuilder),
(
r#"SELECT "character" FROM "character" WHERE "character" MATCH ?"#.to_owned(),
Values(vec!["test".into()])
)
);
  • [#497] Added the FULL OUTER JOIN
  • [#530] Added PgFunc::get_random_uuid
  • [#528] Added SimpleExpr::eq, SimpleExpr::ne, Expr::not_equals
  • [#529] Added PgFunc::starts_with
  • [#535] Added Expr::custom_keyword and SimpleExpr::not
use sea_query::*;

let query = Query::select()
.expr(Expr::custom_keyword(Alias::new("test")))
.to_owned();

assert_eq!(query.to_string(MysqlQueryBuilder), r#"SELECT test"#);
assert_eq!(query.to_string(PostgresQueryBuilder), r#"SELECT test"#);
assert_eq!(query.to_string(SqliteQueryBuilder), r#"SELECT test"#);
  • [#539] Added SimpleExpr::like, SimpleExpr::not_like and Expr::cast_as
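A quick sketch of Expr::cast_as; the "integer" alias is just an illustrative cast target:

use sea_query::*;

let query = Query::select()
    // CAST the string literal '1' to the named type
    .expr(Expr::val("1").cast_as(Alias::new("integer")))
    .to_owned();

// Renders as: SELECT CAST('1' AS integer)
println!("{}", query.to_string(PostgresQueryBuilder));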
  • [#532] Added support for NULLS NOT DISTINCT clause for Postgres
  • [#531] Added Expr::cust_with_expr and Expr::cust_with_exprs
use sea_query::{tests_cfg::*, *};

let query = Query::select()
.expr(Expr::cust_with_expr("data @? ($1::JSONPATH)", "hello"))
.to_owned();

assert_eq!(
query.to_string(PostgresQueryBuilder),
r#"SELECT data @? ('hello'::JSONPATH)"#
);
  • [#538] Added support for converting &String to Value

Miscellaneous Enhancements

  • [#475] New struct FunctionCall, which holds a function and its arguments
  • [#503] Support BigDecimal, IpNetwork and MacAddress for sea-query-postgres
  • [#511] Made the value::with_array module public, thereby making the NotU8 trait public
  • [#524] Drop the Sized requirement on implementers of SchemaBuilders

Integration Examples

SeaQuery plays well with the other crates in the Rust ecosystem.

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

· 4 min read
SeaQL Team

🎉 We are pleased to release Seaography 0.3.0! Here are some feature highlights 🌟:

Dependency Upgrade

[#93] We have upgraded a major dependency:

You might need to upgrade the corresponding dependency in your application as well.

Support Self Referencing Relation

[#99] You can now query self referencing models and the inverse of it.

A self-referencing relation should be added to the Relation enum; note that the belongs_to attribute must be belongs_to = "Entity".

use sea_orm::entity::prelude::*;

#[derive(
Clone, Debug, PartialEq, DeriveEntityModel,
async_graphql::SimpleObject, seaography::macros::Filter,
)]
#[sea_orm(table_name = "staff")]
#[graphql(complex)]
#[graphql(name = "Staff")]
pub struct Model {
#[sea_orm(primary_key)]
pub staff_id: i32,
pub first_name: String,
pub last_name: String,
pub reports_to_id: Option<i32>,
}

#[derive(
Copy, Clone, Debug, EnumIter, DeriveRelation,
seaography::macros::RelationsCompact
)]
pub enum Relation {
#[sea_orm(
belongs_to = "Entity",
from = "Column::ReportsToId",
to = "Column::StaffId",
)]
SelfRef,
}

impl ActiveModelBehavior for ActiveModel {}

Then, you can query the related models in GraphQL.

{
staff {
nodes {
firstName
reportsToId
selfRefReverse {
staffId
firstName
}
selfRef {
staffId
firstName
}
}
}
}

The resulting JSON

{
"staff": {
"nodes": [
{
"firstName": "Mike",
"reportsToId": null,
"selfRefReverse": [
{
"staffId": 2,
"firstName": "Jon"
}
],
"selfRef": null
},
{
"firstName": "Jon",
"reportsToId": 1,
"selfRefReverse": null,
"selfRef": {
"staffId": 1,
"firstName": "Mike"
}
}
]
}
}

Web Framework Generator

[#74] You can generate a Seaography project with either Actix or Poem as the web server.

CLI Generator Option

Run seaography-cli to generate Seaography code with Actix or Poem as the web framework.

# The command takes three arguments, generating a project with the Poem web framework by default
seaography-cli <DATABASE_URL> <CRATE_NAME> <DESTINATION>

# Generating project with Actix web framework
seaography-cli -f actix <DATABASE_URL> <CRATE_NAME> <DESTINATION>

# MySQL
seaography-cli mysql://root:root@localhost/sakila seaography-mysql-example examples/mysql
# PostgreSQL
seaography-cli postgres://root:root@localhost/sakila seaography-postgres-example examples/postgres
# SQLite
seaography-cli sqlite://examples/sqlite/sakila.db seaography-sqlite-example examples/sqliteql

Actix

use async_graphql::{
dataloader::DataLoader,
http::{playground_source, GraphQLPlaygroundConfig},
EmptyMutation, EmptySubscription, Schema,
};
use async_graphql_actix_web::{GraphQLRequest, GraphQLResponse};
use sea_orm::Database;
use seaography_example_project::*;
// ...

async fn graphql_playground() -> Result<HttpResponse> {
Ok(HttpResponse::Ok()
.content_type("text/html; charset=utf-8")
.body(
playground_source(GraphQLPlaygroundConfig::new("http://localhost:8000"))
))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
// ...

let database = Database::connect(db_url).await.unwrap();
let orm_dataloader: DataLoader<OrmDataloader> = DataLoader::new(
OrmDataloader {
db: database.clone(),
},
tokio::spawn,
);

let schema = Schema::build(QueryRoot, EmptyMutation, EmptySubscription)
.data(database)
.data(orm_dataloader)
.finish();

// `HttpServer::new` expects an app factory closure, called once per worker
let app = move || App::new()
.app_data(Data::new(schema.clone()))
.service(web::resource("/").guard(guard::Post()).to(index))
.service(web::resource("/").guard(guard::Get()).to(graphql_playground));

HttpServer::new(app)
.bind("127.0.0.1:8000")?
.run()
.await
}

Poem

use async_graphql::{
dataloader::DataLoader,
http::{playground_source, GraphQLPlaygroundConfig},
EmptyMutation, EmptySubscription, Schema,
};
use async_graphql_poem::GraphQL;
use poem::{get, handler, listener::TcpListener, web::Html, IntoResponse, Route, Server};
use sea_orm::Database;
use seaography_example_project::*;
// ...

#[handler]
async fn graphql_playground() -> impl IntoResponse {
Html(playground_source(GraphQLPlaygroundConfig::new("/")))
}

#[tokio::main]
async fn main() {
// ...

let database = Database::connect(db_url).await.unwrap();
let orm_dataloader: DataLoader<OrmDataloader> = DataLoader::new(
OrmDataloader { db: database.clone() },
tokio::spawn,
);

let schema = Schema::build(QueryRoot, EmptyMutation, EmptySubscription)
.data(database)
.data(orm_dataloader)
.finish();

let app = Route::new()
.at("/", get(graphql_playground)
.post(GraphQL::new(schema)));

Server::new(TcpListener::bind("0.0.0.0:8000"))
.run(app)
.await
.unwrap();
}

[#84] Filtering, sorting and paginating related 1-to-many queries. Note that pagination is a work in progress; currently it is in-memory pagination.

For example: find all inactive customers, include their address, and include their payments with amount greater than 7, ordered by amount and taking the second result. You can execute the query below at our GraphQL playground.

{
customer(
filters: { active: { eq: 0 } }
pagination: { cursor: { limit: 3, cursor: "Int[3]:271" } }
) {
nodes {
customerId
lastName
email
address {
address
}
payment(
filters: { amount: { gt: "7" } }
orderBy: { amount: ASC }
pagination: { pages: { limit: 1, page: 1 } }
) {
nodes {
paymentId
amount
}
pages
current
pageInfo {
hasPreviousPage
hasNextPage
}
}
}
pageInfo {
hasPreviousPage
hasNextPage
endCursor
}
}
}

Integration Examples

We have the following examples for you, alongside the SQL scripts to initialize the database.

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

· 7 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.10.0!

Rust 1.65

The long-anticipated Rust 1.65 has been released! Generic associated types (GATs) must be the hottest newly-stabilized feature.

How is GAT useful to SeaORM? Let's take a look at the following:

trait StreamTrait<'a>: Send + Sync {
type Stream: Stream<Item = Result<QueryResult, DbErr>> + Send;

fn stream(
&'a self,
stmt: Statement,
) -> Pin<Box<dyn Future<Output = Result<Self::Stream, DbErr>> + 'a + Send>>;
}

You can see that the Future has a lifetime 'a, but as a side effect the lifetime is tied to StreamTrait.

With GAT, the lifetime can be elided:

trait StreamTrait: Send + Sync {
type Stream<'a>: Stream<Item = Result<QueryResult, DbErr>> + Send
where
Self: 'a;

fn stream<'a>(
&'a self,
stmt: Statement,
) -> Pin<Box<dyn Future<Output = Result<Self::Stream<'a>, DbErr>> + 'a + Send>>;
}

What benefit does it bring in practice? Consider you have a function that accepts a generic ConnectionTrait and calls stream():

async fn processor<'a, C>(conn: &'a C) -> Result<...>
where C: ConnectionTrait + StreamTrait<'a> {...}

The fact that the lifetime of the connection is tied to the stream can confuse the compiler, most notably when you are working with transactions:

async fn do_transaction<C>(conn: &C) -> Result<...>
where C: ConnectionTrait + TransactionTrait
{
let txn = conn.begin().await?;
processor(&txn).await?;
txn.commit().await?;
}

But now, with the lifetime of the stream elided, it's much easier to work on streams inside transactions because the two lifetimes are now distinct and the stream's lifetime is implicit:

async fn processor<C>(conn: &C) -> Result<...>
where C: ConnectionTrait + StreamTrait {...}

Big thanks to @nappa85 for the contribution.


Below are some feature highlights 🌟:

Support Array Data Types in Postgres

[#1132] Support model field of type Vec<T>. (by @hf29h8sh321, @ikrivosheev, @tyt2y3, @billy1624)

You can define a vector of types that are already supported by SeaORM in the model.

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "collection")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub integers: Vec<i32>,
pub integers_opt: Option<Vec<i32>>,
pub floats: Vec<f32>,
pub doubles: Vec<f64>,
pub strings: Vec<String>,
}

Keep in mind that you need to enable the postgres-array feature; this is a Postgres-only feature.

sea-orm = { version = "0.10", features = ["postgres-array", ...] }

Better Error Types

[#750, #1002] Error types with a parsable database-specific error. (by @mohs8421, @tyt2y3)

let mud_cake = cake::ActiveModel {
id: Set(1),
name: Set("Moldy Cake".to_owned()),
price: Set(dec!(10.25)),
gluten_free: Set(false),
serial: Set(Uuid::new_v4()),
bakery_id: Set(None),
};

// Insert a new cake with its primary key (`id` column) set to 1.
let cake = mud_cake.save(db).await.expect("could not insert cake");

// Insert the same row again; it fails
// because the primary key of each row must be unique.
let error: DbErr = cake
.into_active_model()
.insert(db)
.await
.expect_err("inserting should fail due to duplicate primary key");

match error {
DbErr::Exec(RuntimeErr::SqlxError(error)) => match error {
Error::Database(e) => {
// We check the error code thrown by the database (MySQL in this case),
// `23000` means `ER_DUP_KEY`: we have a duplicate key in the table.
assert_eq!(e.code().unwrap(), "23000");
}
_ => panic!("Unexpected sqlx-error kind"),
},
_ => panic!("Unexpected Error kind"),
}

Run Migration on Any Postgres Schema

[#1056] By default, migrations run on the public schema; you can now override it when running migrations via the CLI or programmatically. (by @MattGson, @nahuakang, @billy1624)

For the CLI, you can specify the target schema with the -s / --database_schema option:

  • via sea-orm-cli: sea-orm-cli migrate -u postgres://root:root@localhost/database -s my_schema
  • via SeaORM migrator: cargo run -- -u postgres://root:root@localhost/database -s my_schema

You can also run the migration on the target schema programmatically:

let connect_options = ConnectOptions::new("postgres://root:root@localhost/database".into())
.set_schema_search_path("my_schema".into()) // Override the default schema
.to_owned();

let db = Database::connect(connect_options).await?;

migration::Migrator::up(&db, None).await?;

Breaking Changes

  • The definition of ColumnType::Enum changed:
enum ColumnType {
// then
Enum(String, Vec<String>)

// now
Enum {
/// Name of enum
name: DynIden,
/// Variants of enum
variants: Vec<DynIden>,
}
...
}
  • A new method array_type was added to ValueType:
impl sea_orm::sea_query::ValueType for MyType {
fn array_type() -> sea_orm::sea_query::ArrayType {
sea_orm::sea_query::ArrayType::TypeName
}
...
}
  • ActiveEnum::name() changed return type to DynIden:
#[derive(Debug, Iden)]
#[iden = "category"]
pub struct CategoryEnum;

impl ActiveEnum for Category {
// then
fn name() -> String {
"category".to_owned()
}

// now
fn name() -> DynIden {
SeaRc::new(CategoryEnum)
}
...
}

SeaORM Enhancements

CLI Enhancements

Please check here for the complete changelog.

Integration Examples

SeaORM plays well with the other crates in the async ecosystem. We maintain an array of example projects for building REST, GraphQL and gRPC services. More examples wanted!

Our GitHub Sponsor profile is up! If you feel generous, a small donation will be greatly appreciated.

A big shout out to our sponsors 😇:

Émile Fugulin
Dean Sheather
Shane Sveller
Sakti Dwi Cahyono
Henrik Giesel
Jacob Trueb
Marcus Buffett
Unnamed Sponsor
Unnamed Sponsor

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.11.x.

· 2 min read
Billy Chan

Not long ago we opened a PR "Toggle stacked download graph #5010" resolving Convert download chart from stacked chart to regular chart #3876 for crates.io.

What's it all about?

Problem

The download graph on crates.io used to be a stacked graph, with the download counts of older versions stacked on top of newer versions, so you might misinterpret the numbers. Consider this: at first glance, it seems that version 0.9.2 has 1,500+ downloads on Nov 7. But in fact, it had only 237 downloads that day, because the graph shows cumulative downloads.

crates.io Stacked Download Graph

This makes it hard to compare the download trends of different versions over time. Why is this important, you may ask? It's important to observe the adoption rate of a newer version upon release. This paints a general picture of whether existing users are upgrading to the newer version or not.

Solution

The idea is simple but effective: having a dropdown to toggle between the stacked and unstacked download graphs. With this, one can switch between both display modes; comparing the download trends of different versions and spotting the most downloaded version in the past 90 days become straightforward and intuitive.

crates.io Unstacked Download Graph

Conclusion

This is a great tool for us to gauge the adoption rate of our new releases, and we highly encourage users to upgrade to newer releases that contain feature updates and bug fixes.

· 5 min read
SeaQL Team

🎉 We are pleased to release SeaQuery 0.27.0! Here are some feature highlights 🌟:

Dependency Upgrade

[#356] We have upgraded a major dependency:

  • Upgrade sqlx to 0.6.1

You might need to upgrade the corresponding dependency in your application as well.

Drivers support

We have reworked the way drivers work in SeaQuery: prior to 0.27.0, users had to invoke the sea_query_driver_* macros. Now each driver (sqlx, postgres & rusqlite) has its own supporting crate, which integrates tightly with the corresponding library. Check out our integration examples below for more details.

[#383] Deprecate sea-query-driver in favour of sea-query-binder

[#422] Rusqlite support is moved to sea-query-rusqlite

[#433] Postgres support is moved to sea-query-postgres

// before
sea_query::sea_query_driver_postgres!();
use sea_query_driver_postgres::{bind_query, bind_query_as};

let (sql, values) = Query::select()
.from(Character::Table)
.expr(Func::count(Expr::col(Character::Id)))
.build(PostgresQueryBuilder);

let row = bind_query(sqlx::query(&sql), &values)
.fetch_one(&mut pool)
.await
.unwrap();

// now
use sea_query_binder::SqlxBinder;

let (sql, values) = Query::select()
.from(Character::Table)
.expr(Func::count(Expr::col(Character::Id)))
.build_sqlx(PostgresQueryBuilder);

let row = sqlx::query_with(&sql, values)
.fetch_one(&mut pool)
.await
.unwrap();

// You can now make use of SQLx's `query_as_with` nicely:
let rows = sqlx::query_as_with::<_, StructWithFromRow, _>(&sql, values)
.fetch_all(&mut pool)
.await
.unwrap();

Support sub-query operators: EXISTS, ALL, ANY, SOME

[#118] Added sub-query operators: EXISTS, ALL, ANY, SOME

let query = Query::select()
.column(Char::Id)
.from(Char::Table)
.and_where(
Expr::col(Char::Id)
.eq(
Expr::any(
Query::select().column(Char::Id).from(Char::Table).take()
)
)
)
.to_owned();

assert_eq!(
query.to_string(MysqlQueryBuilder),
r#"SELECT `id` FROM `character` WHERE `id` = ANY(SELECT `id` FROM `character`)"#
);
assert_eq!(
query.to_string(PostgresQueryBuilder),
r#"SELECT "id" FROM "character" WHERE "id" = ANY(SELECT "id" FROM "character")"#
);

Support ON CONFLICT WHERE

[#366] Added support to ON CONFLICT WHERE

let query = Query::insert()
.into_table(Glyph::Table)
.columns([Glyph::Aspect, Glyph::Image])
.values_panic(vec![
2.into(),
3.into(),
])
.on_conflict(
OnConflict::column(Glyph::Id)
.update_expr((Glyph::Image, Expr::val(1).add(2)))
.target_and_where(Expr::tbl(Glyph::Table, Glyph::Aspect).is_null())
.to_owned()
)
.to_owned();

assert_eq!(
query.to_string(MysqlQueryBuilder),
r#"INSERT INTO `glyph` (`aspect`, `image`) VALUES (2, 3) ON DUPLICATE KEY UPDATE `image` = 1 + 2"#
);
assert_eq!(
query.to_string(PostgresQueryBuilder),
r#"INSERT INTO "glyph" ("aspect", "image") VALUES (2, 3) ON CONFLICT ("id") WHERE "glyph"."aspect" IS NULL DO UPDATE SET "image" = 1 + 2"#
);
assert_eq!(
query.to_string(SqliteQueryBuilder),
r#"INSERT INTO "glyph" ("aspect", "image") VALUES (2, 3) ON CONFLICT ("id") WHERE "glyph"."aspect" IS NULL DO UPDATE SET "image" = 1 + 2"#
);

Changed cond_where chaining semantics

[#414] Changed cond_where chaining semantics

// Before: will extend current Condition
assert_eq!(
Query::select()
.cond_where(any![Expr::col(Glyph::Id).eq(1), Expr::col(Glyph::Id).eq(2)])
.cond_where(Expr::col(Glyph::Id).eq(3))
.to_owned()
.to_string(PostgresQueryBuilder),
r#"SELECT WHERE "id" = 1 OR "id" = 2 OR "id" = 3"#
);
// Before: confusing, since it depends on the order of invocation:
assert_eq!(
Query::select()
.cond_where(Expr::col(Glyph::Id).eq(3))
.cond_where(any![Expr::col(Glyph::Id).eq(1), Expr::col(Glyph::Id).eq(2)])
.to_owned()
.to_string(PostgresQueryBuilder),
r#"SELECT WHERE "id" = 3 AND ("id" = 1 OR "id" = 2)"#
);
// Now: will always conjoin with `AND`
assert_eq!(
Query::select()
.cond_where(Expr::col(Glyph::Id).eq(1))
.cond_where(any![Expr::col(Glyph::Id).eq(2), Expr::col(Glyph::Id).eq(3)])
.to_owned()
.to_string(PostgresQueryBuilder),
r#"SELECT WHERE "id" = 1 AND ("id" = 2 OR "id" = 3)"#
);
// Now: so they are now equivalent
assert_eq!(
Query::select()
.cond_where(any![Expr::col(Glyph::Id).eq(2), Expr::col(Glyph::Id).eq(3)])
.cond_where(Expr::col(Glyph::Id).eq(1))
.to_owned()
.to_string(PostgresQueryBuilder),
r#"SELECT WHERE ("id" = 2 OR "id" = 3) AND "id" = 1"#
);

Added OnConflict::value and OnConflict::values

[#451] Implemented From<T> for SimpleExpr, for any T that implements Into<Value>

// Before: notice the tuple
OnConflict::column(Glyph::Id).update_expr((Glyph::Image, Expr::val(1).add(2)))
// After: it accepts `Value` as well as `SimpleExpr`
OnConflict::column(Glyph::Id).value(Glyph::Image, Expr::val(1).add(2))

Improvement to ColumnDef::default

[#347] ColumnDef::default now accepts Into<SimpleExpr> instead of Into<Value>

// Now we can write:
ColumnDef::new(Char::FontId)
.timestamp()
.default(Expr::current_timestamp())

Breaking Changes

  • [#386] Changed in_tuples interface to accept IntoValueTuple
  • [#320] Removed deprecated methods
  • [#440] CURRENT_TIMESTAMP changed from being a function to a keyword
  • [#375] Updated the SQLite boolean type from integer to boolean
  • [#451] Deprecated OnConflict::update_value, OnConflict::update_values, OnConflict::update_expr, OnConflict::update_exprs
  • [#451] Deprecated InsertStatement::exprs, InsertStatement::exprs_panic
  • [#451] Deprecated UpdateStatement::col_expr, UpdateStatement::value_expr, UpdateStatement::exprs
  • [#451] UpdateStatement::value now accepts Into<SimpleExpr> instead of Into<Value> (see the sketch after this list)
  • [#451] Expr::case, CaseStatement::case and CaseStatement::finally now accepts Into<SimpleExpr> instead of Into<Expr>
  • [#460] InsertStatement::values and UpdateStatement::values now accept IntoIterator<Item = SimpleExpr> instead of IntoIterator<Item = Value>
  • [#409] Use native api from SQLx for SQLite to work with time
  • [#435] Changed type of ColumnType::Enum from (String, Vec<String>) to Enum { name: DynIden, variants: Vec<DynIden>}
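
A quick sketch of the UpdateStatement::value change from [#451]: the right-hand side can now be an expression as well as a plain value. A minimal example using the usual tests_cfg tables:

use sea_query::{tests_cfg::*, *};

let query = Query::update()
    .table(Glyph::Table)
    // `value` now takes any `Into<SimpleExpr>`, e.g. an arithmetic expression
    .value(Glyph::Aspect, Expr::val(1).add(2))
    .and_where(Expr::col(Glyph::Id).eq(1))
    .to_owned();

// Renders as: UPDATE "glyph" SET "aspect" = 1 + 2 WHERE "id" = 1
println!("{}", query.to_string(PostgresQueryBuilder));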

Miscellaneous Enhancements

  • [#336] Added support for one-dimensional Postgres arrays for SQLx
  • [#373] Support CROSS JOIN
  • [#457] Added support for DROP COLUMN for SQLite
  • [#466] Added YEAR, BIT and VARBIT types
  • [#338] Handle Postgres schema name for schema statements
  • [#418] Added %, << and >> binary operators
  • [#329] Added RAND function
  • [#425] Implements Display for Value
  • [#427] Added INTERSECT and EXCEPT to UnionType
  • [#448] OrderedStatement::order_by_customs, OrderedStatement::order_by_columns, OverStatement::partition_by_customs and OverStatement::partition_by_columns now accept IntoIterator<Item = T> instead of Vec<T>
  • [#452] TableAlterStatement::rename_column, TableAlterStatement::drop_column, ColumnDef::new and ColumnDef::new_with_type now accept IntoIden instead of Iden
  • [#426] Cleanup IndexBuilder trait methods
  • [#436] Introduce SqlWriter trait
  • [#448] Remove unneeded vec! from examples

Bug Fixes

  • [#449] distinct_on properly handles ColumnRef
  • [#461] Removed ON for DROP INDEX for SQLite
  • [#468] Change datetime string format to include microseconds
  • [#452] ALTER TABLE for PostgreSQL with UNIQUE constraint

Integration Examples

SeaQuery plays well with the other crates in the Rust ecosystem.

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

· 6 min read
SeaQL Team

Seaography is a GraphQL framework for building GraphQL resolvers using SeaORM. It ships with a CLI tool that can generate ready-to-compile Rust projects from existing MySQL, Postgres and SQLite databases.

The design and implementation of Seaography can be found on our release blog post and documentation.

Extending a SeaORM project

Since Seaography is built on top of SeaORM, you can easily build a GraphQL server from a SeaORM project.

Start by adding Seaography and GraphQL dependencies to your Cargo.toml.

Cargo.toml
[dependencies]
sea-orm = { version = "^0.9", features = [ ... ] }
+ seaography = { version = "^0.1", features = [ "with-decimal", "with-chrono" ] }
+ async-graphql = { version = "4.0.10", features = ["decimal", "chrono", "dataloader"] }
+ async-graphql-poem = { version = "4.0.10" }

Then, derive a few macros on the SeaORM entities.

src/entities/film_actor.rs
use sea_orm::entity::prelude::*;

#[derive(
Clone,
Debug,
PartialEq,
DeriveEntityModel,
+ async_graphql::SimpleObject,
+ seaography::macros::Filter,
)]
+ #[graphql(complex)]
+ #[graphql(name = "FilmActor")]
#[sea_orm(table_name = "film_actor")]
pub struct Model {
#[sea_orm(primary_key, auto_increment = false)]
pub actor_id: i32,
#[sea_orm(primary_key, auto_increment = false)]
pub film_id: i32,
pub last_update: DateTimeUtc,
}

#[derive(
Copy,
Clone,
Debug,
EnumIter,
DeriveRelation,
+ seaography::macros::RelationsCompact,
)]
pub enum Relation {
#[sea_orm(
belongs_to = "super::film::Entity",
from = "Column::FilmId",
to = "super::film::Column::FilmId",
on_update = "Cascade",
on_delete = "NoAction"
)]
Film,
#[sea_orm(
belongs_to = "super::actor::Entity",
from = "Column::ActorId",
to = "super::actor::Column::ActorId",
on_update = "Cascade",
on_delete = "NoAction"
)]
Actor,
}

We also need to define QueryRoot for the GraphQL server. This defines the GraphQL schema.

src/query_root.rs
#[derive(Debug, seaography::macros::QueryRoot)]
#[seaography(entity = "crate::entities::actor")]
#[seaography(entity = "crate::entities::film")]
#[seaography(entity = "crate::entities::film_actor")]
pub struct QueryRoot;
src/lib.rs
use sea_orm::prelude::*;

pub mod entities;
pub mod query_root;

pub use query_root::QueryRoot;

pub struct OrmDataloader {
pub db: DatabaseConnection,
}

Finally, create an executable to drive the GraphQL server.

src/main.rs
use async_graphql::{
dataloader::DataLoader,
http::{playground_source, GraphQLPlaygroundConfig},
EmptyMutation, EmptySubscription, Schema,
};
use async_graphql_poem::GraphQL;
use poem::{get, handler, listener::TcpListener, web::Html, IntoResponse, Route, Server};
use sea_orm::Database;
use seaography_example_project::*;
// ...

#[handler]
async fn graphql_playground() -> impl IntoResponse {
Html(playground_source(GraphQLPlaygroundConfig::new("/")))
}

#[tokio::main]
async fn main() {
// ...

let database = Database::connect(db_url).await.unwrap();
let orm_dataloader: DataLoader<OrmDataloader> = DataLoader::new(
OrmDataloader { db: database.clone() },
tokio::spawn,
);

let schema = Schema::build(QueryRoot, EmptyMutation, EmptySubscription)
.data(database)
.data(orm_dataloader)
.finish();

let app = Route::new()
.at("/", get(graphql_playground)
.post(GraphQL::new(schema)));

Server::new(TcpListener::bind("0.0.0.0:8000"))
.run(app)
.await
.unwrap();
}

Generating a project from database

If all you have is a database schema, good news! You can set up a GraphQL server without writing a single line of code.

Install seaography-cli; it helps you generate SeaORM entities along with a full Rust project based on a database schema.

cargo install seaography-cli

Run seaography-cli to generate code for the GraphQL server.

# The command takes three arguments
seaography-cli <DATABASE_URL> <CRATE_NAME> <DESTINATION>

# MySQL
seaography-cli mysql://root:root@localhost/sakila seaography-mysql-example examples/mysql
# PostgreSQL
seaography-cli postgres://root:root@localhost/sakila seaography-postgres-example examples/postgres
# SQLite
seaography-cli sqlite://examples/sqlite/sakila.db seaography-sqlite-example examples/sqliteql

Check out the example projects

We have the following examples for you, alongside the SQL scripts to initialize the database.

All examples provide a web-based GraphQL playground when running, so you can inspect the GraphQL schema and make queries. We also hosted a demo GraphQL playground in case you can't wait to play with it.

Starting the GraphQL Server

Your GraphQL server is ready to launch! Go to the Rust project root then execute cargo run to spin it up.

$ cargo run

Playground: http://localhost:8000

Visit the GraphQL playground at http://localhost:8000

GraphQL Playground

Query Data via GraphQL

Let's say we want to get the first 3 films released in or after 2006, sorted in ascending order of title.

{
film(
pagination: { limit: 3, page: 0 }
filters: { releaseYear: { gte: "2006" } }
orderBy: { title: ASC }
) {
data {
filmId
title
description
releaseYear
filmActor {
actor {
actorId
firstName
lastName
}
}
}
pages
current
}
}

We got the following JSON result after running the GraphQL query.

{
"data": {
"film": {
"data": [
{
"filmId": 1,
"title": "ACADEMY DINOSAUR",
"description": "An Epic Drama of a Feminist And a Mad Scientist who must Battle a Teacher in The Canadian Rockies",
"releaseYear": "2006",
"filmActor": [
{
"actor": {
"actorId": 1,
"firstName": "PENELOPE",
"lastName": "GUINESS"
}
},
{
"actor": {
"actorId": 10,
"firstName": "CHRISTIAN",
"lastName": "GABLE"
}
},
// ...
]
},
{
"filmId": 2,
"title": "ACE GOLDFINGER",
"description": "A Astounding Epistle of a Database Administrator And a Explorer who must Find a Car in Ancient China",
"releaseYear": "2006",
"filmActor": [
// ...
]
},
// ...
],
"pages": 334,
"current": 0
}
}
}

Behind the scenes, the following SQL queries were executed:

SELECT "film"."film_id",
"film"."title",
"film"."description",
"film"."release_year",
"film"."language_id",
"film"."original_language_id",
"film"."rental_duration",
"film"."rental_rate",
"film"."length",
"film"."replacement_cost",
"film"."rating",
"film"."special_features",
"film"."last_update"
FROM "film"
WHERE "film"."release_year" >= '2006'
ORDER BY "film"."title" ASC
LIMIT 3 OFFSET 0

SELECT "film_actor"."actor_id", "film_actor"."film_id", "film_actor"."last_update"
FROM "film_actor"
WHERE "film_actor"."film_id" IN (1, 3, 2)

SELECT "actor"."actor_id", "actor"."first_name", "actor"."last_name", "actor"."last_update"
FROM "actor"
WHERE "actor"."actor_id" IN (24, 162, 20, 160, 1, 188, 123, 30, 53, 40, 2, 64, 85, 198, 10, 19, 108, 90)

Under the hood, Seaography uses async_graphql::dataloader when querying nested objects to tackle the N+1 problem.

To learn more, check out the Seaography Documentation.

Conclusion

Seaography is an ergonomic library that turns SeaORM entities into GraphQL nodes. It provides a set of utilities which, combined with a code generator, make building GraphQL APIs a breeze.

However, Seaography is still a newborn. Like all other open-source projects developed by passionate Rust developers, you can contribute to it if you also find the concept interesting. With its addition to the SeaQL ecosystem, we are one step closer to the vision of Rust being the best tool for data engineering.

People

Seaography is created by:

Panagiotis Karatakis
Summer of Code Contributor; developer of Seaography
Chris Tsang
Summer of Code Mentor; lead developer of SeaQL
Billy Chan
Summer of Code Mentor; core member of SeaQL

· 4 min read
SeaQL Team

What a fruitful Summer of Code! Today, we are excited to introduce Seaography to the SeaQL community. Seaography is a GraphQL framework for building GraphQL resolvers using SeaORM. It ships with a CLI tool that can generate ready-to-compile Rust projects from existing MySQL, Postgres and SQLite databases.

Motivation

We observed that other ecosystems have similar tools, such as PostGraphile and Hasura, that allow users to query a database via GraphQL with minimal upfront effort. We decided to bring that seamless experience to the Rust ecosystem.

For existing SeaORM users, adding a GraphQL API is straightforward. Start by adding the seaography and async-graphql dependencies to your crate. Then, derive a few extra macros on the SeaORM entities. Finally, spin up a GraphQL server to serve queries!

If you are new to SeaORM, no worries, we have your back. You only need to provide a database connection, and seaography-cli will generate the SeaORM entities together with a complete Rust project!

Design

We considered two approaches in our initial discussion: 1) a blackbox query engine; 2) a code generator. The drawback of a blackbox query engine is that it's difficult to customize or extend its behaviour, making it difficult to develop and operate in the long run. We opted for the code generator approach, giving users full control and endless possibilities with the versatile async Rust ecosystem.

This project is separated into the following crates:

  • seaography: The facade crate; exporting macros, structures and helper functions to turn SeaORM entities into GraphQL nodes.

  • seaography-cli: The CLI tool; it generates SeaORM entities along with a full Rust project based on a user-provided database.

  • seaography-discoverer: A helper crate used by the CLI tool to discover the database schema and transform into a generic format.

  • seaography-generator: A helper crate used by the CLI tool to consume the database schema and generate a full Rust project.

  • seaography-derive: A set of procedural macros to derive types and trait implementations on SeaORM entities, turning them into GraphQL nodes.

Features

  • Relational query (1-to-1, 1-to-N)
  • Pagination on query's root entity
  • Filter with operators (e.g. gt, lt, eq)
  • Order by any column

Getting Started

To get you started quickly, we have the following examples for you, alongside the SQL scripts to initialize the database.

All examples provide a web-based GraphQL playground when running, so you can inspect the GraphQL schema and make queries. We also hosted a demo GraphQL playground in case you can't wait to play with it.

For more documentation, visit www.sea-ql.org/Seaography.

What's Next?

This project passed the first milestone shipping the essential features, but it still has a long way to go. The next milestone would be:

  • Query enhancements
    • Filter related queries
    • Filter based on related queries properties
    • Paginate related queries
    • Order by related queries
  • Cursor based pagination
  • Single entity query
  • Mutations
    • Insert single entity
    • Insert batch entities
    • Update single entity
    • Update batch entities using filter
    • Delete single entity
    • Delete batch entities

Conclusion

Seaography is an ergonomic library that turns SeaORM entities into GraphQL nodes. It provides a set of utilities which, combined with a code generator, make building GraphQL APIs a breeze.

However, Seaography is still a newborn. Like all other open-source projects developed by passionate Rust developers, you can contribute to it if you also find the concept interesting. With its addition to the SeaQL ecosystem, we are one step closer to the vision of Rust being the best tool for data engineering.

People

Seaography is created by:

Panagiotis Karatakis
Summer of Code Contributor; developer of Seaography
Chris Tsang
Summer of Code Mentor; lead developer of SeaQL
Billy Chan
Summer of Code Mentor; core member of SeaQL

· 6 min read
SeaQL Team

We are celebrating the milestone of reaching 3,000 GitHub stars across all SeaQL repositories!

This wouldn't have happened without your support and contribution, so we want to thank the community for being with us along the way.

The Journey

SeaQL.org was founded back in 2020. We devoted ourselves to developing open source libraries that help Rust developers build data-intensive applications. In the past two years, we published and maintained four open source libraries: SeaQuery, SeaSchema, SeaORM and StarfishQL. Each library is designed to fill a niche in the Rust ecosystem, and they are made to play well with other Rust libraries.

2020

  • Oct 2020: SeaQL founded
  • Dec 2020: SeaQuery first released

2021

  • Apr 2021: SeaSchema first released
  • Aug 2021: SeaORM first released
  • Nov 2021: SeaORM reached 0.4.0
  • Dec 2021: SeaQuery reached 0.20.0
  • Dec 2021: SeaSchema reached 0.4.0

2022

  • Apr 2022: SeaQL selected as a Google Summer of Code 2022 mentor organization
  • Apr 2022: StarfishQL first released
  • Jul 2022: SeaQuery reached 0.26.2
  • Jul 2022: SeaSchema reached 0.9.3
  • Jul 2022: SeaORM reached 0.9.1
  • Aug 2022: SeaQL reached 3,000+ GitHub stars

Where're We Now?

We're pleased with the adoption by the Rust community. We couldn't have made it this far without your feedback and contributions.

4 📦 Open source projects
5 🏬 Startups using SeaQL
1,972 🎈 Dependent projects
131 👨‍👩‍👧‍👦 Contributors
1,061 ✅ Merged PRs & resolved issues
3,158 ⭐ GitHub stars
432 🗣️ Discord members
87,937 ⌨️ Lines of Rust
667,769 💿 Downloads on crates.io

* as of Aug 12

Core Members

Our team has grown from two people initially to four. We always welcome passionate engineers to join us!

Chris Tsang
Founder. Led the initial development and maintains the projects.
Billy Chan
Founding member. Contributed many features and bug fixes. Keeps the community alive.
Ivan Krivosheev
Joined in 2022. Contributed many features and bug fixes, most notably to SeaQuery.
Sanford Pun
Developed StarfishQL and wrote SeaORM's tutorial.

Special Thanks

Marco Napetti
Contributed transaction, streaming and tracing API to SeaORM.
nitnelave
Contributed binder crate and other improvements to SeaQuery.
Sam Samai
Developed SeaORM's test suite and demo schema.
Daniel Lyne
Developed SeaSchema's Postgres implementation.
Charles Chege
Developed SeaSchema's SQLite implementation.

Sponsors

If you are feeling generous, a small donation will be greatly appreciated.

A big shout out to our sponsors 😇:

Émile Fugulin
Dean Sheather
Shane Sveller
Sakti Dwi Cahyono
Unnamed Sponsor
Unnamed Sponsor

Contributors

Many features and enhancements are actually proposed and implemented by the community. We want to take this chance to thank all our contributors!

What's Next?

We have two ongoing Summer of Code 2022 projects to enrich the SeaQL ecosystem, planned to be released later this year. In the meantime, we're focusing on improving existing SeaQL libraries until they reach version 1.0; we'd love to hear comments and feedback from the community.

If you like what we do, consider starring, commenting, sharing, contributing and together building for Rust's future!

· 3 min read
SeaQL Team

🎉 We are pleased to release SeaQuery 0.26.0! Here are some feature highlights 🌟:

Dependency Upgrades

[#356] We have upgraded a few major dependencies:

Note that you might need to upgrade the corresponding dependency in your application as well.

VALUES lists

[#351] Add support for VALUES lists

// SELECT * FROM (VALUES (1, 'hello'), (2, 'world')) AS "x"
let query = SelectStatement::new()
.expr(Expr::asterisk())
.from_values(vec![(1i32, "hello"), (2, "world")], Alias::new("x"))
.to_owned();

assert_eq!(
query.to_string(PostgresQueryBuilder),
r#"SELECT * FROM (VALUES (1, 'hello'), (2, 'world')) AS "x""#
);

Introduce sea-query-binder

[#273] Native support for SQLx without macros

use sea_query_binder::SqlxBinder;

// Build the SeaQuery query and prepare it for SQLx
let (sql, values) = Query::select()
.columns([
Character::Id,
Character::Uuid,
Character::Character,
Character::FontSize,
Character::Meta,
Character::Decimal,
Character::BigDecimal,
Character::Created,
Character::Inet,
Character::MacAddress,
])
.from(Character::Table)
.order_by(Character::Id, Order::Desc)
.build_sqlx(PostgresQueryBuilder);

// Execute query
let rows = sqlx::query_as_with::<_, CharacterStructChrono, _>(&sql, values)
.fetch_all(&mut pool)
.await?;

// Print rows
for row in rows.iter() {
println!("{:?}", row);
}

CASE WHEN statement support

[#304] Add support for CASE WHEN statement

let query = Query::select()
.expr_as(
CaseStatement::new()
.case(Expr::tbl(Glyph::Table, Glyph::Aspect).is_in(vec![2, 4]), Expr::val(true))
.finally(Expr::val(false)),
Alias::new("is_even")
)
.from(Glyph::Table)
.to_owned();

assert_eq!(
query.to_string(PostgresQueryBuilder),
r#"SELECT (CASE WHEN ("glyph"."aspect" IN (2, 4)) THEN TRUE ELSE FALSE END) AS "is_even" FROM "glyph""#
);

Add support for Ip(4,6)Network and MacAddress

[#309] Add support for Network types in PostgreSQL backend

Introduce sea-query-attr

[#296] Proc-macro for deriving Iden enum from struct

use sea_query::gen_type_def;

#[gen_type_def]
pub struct Hello {
pub name: String
}

println!("{:?}", HelloTypeDef::Name);

Add ability to alter foreign keys

[#299] Add support for altering foreign keys

let foreign_key_char = TableForeignKey::new()
    .name("FK_character_glyph")
    .from_tbl(Char::Table)
    .from_col(Char::FontId)
    .from_col(Char::Id)
    .to_tbl(Glyph::Table)
    .to_col(Char::FontId)
    .to_col(Char::Id)
    .on_delete(ForeignKeyAction::Cascade)
    .on_update(ForeignKeyAction::Cascade)
    .to_owned();

let table = Table::alter()
    .table(Character::Table)
    .add_foreign_key(&foreign_key_char)
    .to_owned();

assert_eq!(
    table.to_string(PostgresQueryBuilder),
    vec![
        r#"ALTER TABLE "character""#,
        r#"ADD CONSTRAINT "FK_character_glyph""#,
        r#"FOREIGN KEY ("font_id", "id") REFERENCES "glyph" ("font_id", "id")"#,
        r#"ON DELETE CASCADE ON UPDATE CASCADE"#,
    ]
    .join(" ")
);

Select DISTINCT ON

[#250]

let query = Query::select()
    .from(Char::Table)
    .distinct_on(vec![Char::Character])
    .column(Char::Character)
    .column(Char::SizeW)
    .column(Char::SizeH)
    .to_owned();

assert_eq!(
    query.to_string(PostgresQueryBuilder),
    r#"SELECT DISTINCT ON ("character") "character", "size_w", "size_h" FROM "character""#
);

Miscellaneous Enhancements

  • [#353] Support LIKE ... ESCAPE ... expression
  • [#306] Move escape and unescape string to backend
  • [#365] Add method to make a column nullable
  • [#348] Add is & is_not to Expr
  • [#349] Add CURRENT_TIMESTAMP function
  • [#345] Add in_tuple method to Expr
  • [#266] Insert Default
  • [#324] Make sea-query-driver an optional dependency
  • [#334] Add ABS function
  • [#332] Support IF NOT EXISTS when create index
  • [#314] Support different blob types in MySQL
  • [#331] Add VarBinary column type
  • [#335] RETURNING expression supporting SimpleExpr

Integration Examples

SeaQuery plays well with the other crates in the rust ecosystem.

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

· 5 min read
Chris Tsang

It's hard to pin down the exact date, but I think SeaQL.org was set up in July 2020, a little over a year ago. Over the course of the year, SeaORM went from 0.1 to 0.9 and the number of users kept growing. I would like to outline our engineering process in this blog post, and perhaps it can serve as a reference or guidance to prospective contributors and the future maintainer of this project.

In the open source world, the Benevolent Dictator for Life (BDL) model underpins a number of successful open source projects. That's not me! As a maintainer, I believe in an open, bottom-up, iterative and progressive approach. Let me explain each of these words and what they mean to me.

Open

Open as in source availability, but also engineering. We always welcome new contributors! We'd openly discuss ideas and designs. I would often explain why a decision was made in the first place for various things. The project is structured not as a monorepo, but several interdependent repos. This reduces the friction for new contributors, because they can have a smaller field of vision to focus on solving one particular problem at hand.

Bottom-up

We rely on users to file feature requests, bug reports and of course pull requests to drive the project forward. The great thing is, for every feature / bug fix, there is a use case for it and a confirmation from a real user that it works and is reasonable. As a maintainer, I could not have first-hand experience with every feature, and so could not understand some of the pain points.

Iterative

Open source software is imperfect, impermanent and incomplete. While I do have a grand vision in mind, we do not try to rush it all the way in one charge, nor keep a project secret until it is 'complete'. Good old 'release early, release often': we release an initial working version of a tool, gather user feedback and improve upon it, often reimplementing a few things and breaking a few others - which brings us to the next point.

Progressive

Favour progression. Always look forward and leave legacy behind. It does not mean that we would arbitrarily break things, but when a decision is made, we'd always imagine how the software should be without the historic context. We'd provide migration paths and encourage users to move forward with us. After all, Rust is a young and evolving language! You may or may not know that async was just stabilized in 2020.

Enough about the philosophy; let's now talk about the actual engineering process.

1. Idea & Design

We first have some vague idea of what problem we want to tackle. As we add more detail to the use case, we can define the problem and brainstorm solutions. Then we look for workable ways to implement them in Rust.

2. Implementation

An initial proof of concept is appreciated. We iterate on the implementation to reduce the impact and improve the maintainability.

3. Testing

We rely on automated tests. Every feature should come with corresponding tests, and a release is good if and only if all tests are green. That means for features not covered by our test suite, it is uncertain when we might break them. So if a certain undocumented feature is important to you, we encourage you to add it to our test suite.

4. Documentation

Coding is not complete without documentation. Rust doc tests kill two birds with one stone and so are greatly appreciated. For SeaORM we have a separate documentation repository and a tutorial repository. It takes a lot of effort to keep those up to date, and right now it's mostly done by our core contributors.

5. Release

We run on a release train model, although the frequency varies. The ethos is to ship a small number of breaking changes, often. At one point, SeaQuery had a new release every week. SeaORM runs on a monthly cycle, although it has more or less relaxed to bimonthly now. At any time, we maintain two branches: the latest release and master. PRs are always merged into master, and if a change is non-breaking (and worthy) I backport it to the release branch and make a minor release. In the end, I want to maintain momentum and move forward together with the community, and users can have a rough expectation of when merges will be released. Given the current state of the Rust ecosystem, there are simply many changes for which we cannot avoid a breaking release. Users are advised to upgrade regularly, and we ship many small improvements along the way to encourage that.

Conclusion

Open source software is a collaborative effort and thank you all who participated! Also a big thanks to SeaQL's core contributors who made wonders. If you have not already, I invite you to star all our repositories. If you want to support us materially, a small donation would make a big difference. SeaQL the organization is still in its infancy, and your support is vital to SeaQL's longevity and the prospect of the Rust community.

· 10 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.9.0 today! Here are some feature highlights 🌟:

Dependency Upgrades

[#834] We have upgraded a few major dependencies:

Note that you might need to upgrade the corresponding dependency on your application as well.

Proposed by:
Rob Gilson
boraarslan
Contributed by:
Billy Chan

Cursor Pagination

[#822] Paginate models based on column(s) such as the primary key.

// Create a cursor that orders by `cake`.`id`
let mut cursor = cake::Entity::find().cursor_by(cake::Column::Id);

// Filter the paginated result by `cake`.`id` > 1 AND `cake`.`id` < 100
cursor.after(1).before(100);

// Get the first 10 rows (ordered by `cake`.`id` ASC)
let rows: Vec<cake::Model> = cursor.first(10).all(db).await?;

// Get the last 10 rows (ordered by `cake`.`id` DESC, but rows are returned in ascending order)
let rows: Vec<cake::Model> = cursor.last(10).all(db).await?;
Proposed by:
Lucas Berezy
Contributed by:
Émile Fugulin
Billy Chan

Insert On Conflict

[#791] Insert an active model with on conflict behaviour.

let orange = cake::ActiveModel {
    id: ActiveValue::set(2),
    name: ActiveValue::set("Orange".to_owned()),
};

// On conflict do nothing:
// - INSERT INTO "cake" ("id", "name") VALUES (2, 'Orange') ON CONFLICT ("name") DO NOTHING
cake::Entity::insert(orange.clone())
    .on_conflict(
        sea_query::OnConflict::column(cake::Column::Name)
            .do_nothing()
            .to_owned(),
    )
    .exec(db)
    .await?;

// On conflict do update:
// - INSERT INTO "cake" ("id", "name") VALUES (2, 'Orange') ON CONFLICT ("name") DO UPDATE SET "name" = "excluded"."name"
cake::Entity::insert(orange)
    .on_conflict(
        sea_query::OnConflict::column(cake::Column::Name)
            .update_column(cake::Column::Name)
            .to_owned(),
    )
    .exec(db)
    .await?;
Proposed by:
baoyachi. Aka Rust Hairy crabs
Contributed by:
liberwang1013

Join Table with Custom Conditions and Table Alias

[#793, #852] Click Custom Join Conditions and Custom Joins to learn more.

assert_eq!(
    cake::Entity::find()
        .column_as(
            Expr::tbl(Alias::new("fruit_alias"), fruit::Column::Name).into_simple_expr(),
            "fruit_name"
        )
        .join_as(
            JoinType::LeftJoin,
            cake::Relation::Fruit
                .def()
                .on_condition(|_left, right| {
                    Expr::tbl(right, fruit::Column::Name)
                        .like("%tropical%")
                        .into_condition()
                }),
            Alias::new("fruit_alias")
        )
        .build(DbBackend::MySql)
        .to_string(),
    [
        "SELECT `cake`.`id`, `cake`.`name`, `fruit_alias`.`name` AS `fruit_name` FROM `cake`",
        "LEFT JOIN `fruit` AS `fruit_alias` ON `cake`.`id` = `fruit_alias`.`cake_id` AND `fruit_alias`.`name` LIKE '%tropical%'",
    ]
    .join(" ")
);
Proposed by:
Chris Tsang
Tuetuopay
Loïc
Contributed by:
Billy Chan
Matt
liberwang1013

(de)serialize Custom JSON Type

[#794] JSON stored in the database can be deserialized into a custom struct in Rust.

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "json_struct")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    // JSON column defined in `serde_json::Value`
    pub json: Json,
    // JSON column defined in custom struct
    pub json_value: KeyValue,
    pub json_value_opt: Option<KeyValue>,
}

// The custom struct must derive `FromJsonQueryResult`, `Serialize` and `Deserialize`
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, FromJsonQueryResult)]
pub struct KeyValue {
    pub id: i32,
    pub name: String,
    pub price: f32,
    pub notes: Option<String>,
}
Proposed by:
Mara Schulke
Chris Tsang
Contributed by:
Billy Chan

Derived Migration Name

[#736] Introduce DeriveMigrationName procedural macros to infer migration name from the file name.

use sea_orm_migration::prelude::*;

// Used to be...
pub struct Migration;

impl MigrationName for Migration {
    fn name(&self) -> &str {
        "m20220120_000001_create_post_table"
    }
}

// Now... derive `DeriveMigrationName`,
// no longer have to specify the migration name explicitly
#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .create_table( ... )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .drop_table( ... )
            .await
    }
}
Proposed by:
Chris Tsang
Contributed by:
smonv
Lukas Potthast
Billy Chan

SeaORM CLI Improvements

  • [#735] Improve logging of generate entity command
  • [#588] Generate enum with numeric like variants
  • [#755] Allow old pending migration to be applied
  • [#837] Skip generating entity for ignored tables
  • [#724] Generate code for time crate
  • [#850] Add various blob column types
  • [#422] Generate entity files with Postgres's schema name
  • [#851] Skip checking connection string for credentials
Proposed & Contributed by:
ttys3
kyoto7250
yb3616
Émile Fugulin
Bastian
Nahua
Mike
Frank Horvath
Maikel Wever

Miscellaneous Enhancements

  • [#800] Added sqlx_logging_level to ConnectOptions
  • [#768] Added num_items_and_pages to Paginator
  • [#849] Added TryFromU64 for time
  • [#853] Include column name in TryGetError::Null
  • [#778] Refactor stream metrics
Proposed & Contributed by:
SandaruKasa
Eric
Émile Fugulin
Renato Dinhani
kyoto7250
Marco Napetti

Integration Examples

SeaORM plays well with the other crates in the async ecosystem. We maintain an array of example projects for building REST, GraphQL and gRPC services. More examples wanted!

Our GitHub Sponsor profile is up! If you feel generous, a small donation will be greatly appreciated.

A big shout out to our sponsors 😇:

Émile Fugulin
Dean Sheather
Shane Sveller
Sakti Dwi Cahyono
Unnamed Sponsor
Unnamed Sponsor

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.10.x.

· 4 min read
SeaQL Team

We are thrilled to announce that we will bring in four contributors this summer! Two of them are sponsored by Google while two of them are sponsored by SeaQL.

A GraphQL Framework on Top of SeaORM

Panagiotis Karatakis

I'm Panagiotis, I live in Athens, Greece, and I'm currently pursuing my second bachelor's degree, in economic sciences. My first bachelor's was in computer science, and I have a great passion for studying and implementing enterprise software solutions. I've known Rust for the last year and have used it almost daily on a small project that my friends and I built for a startup competition.

I'll be working on creating a CLI tool that will explore a database schema and then generate a ready-to-build async-graphql API. The tool will allow quick integration with the SeaQL and Rust ecosystems as well as GraphQL. To be more specific, I'll use sea-schema for database exploration and sea-orm-codegen for entity generation; my job is to glue those together with the async-graphql library. You can read more here.

SQL Interpreter for Mock Testing

Samyak Sarnayak

I'm Samyak Sarnayak, a final year Computer Science student from Bangalore, India. I started learning Rust around 6-7 months ago and it feels like I have found the perfect language for me :D. It does not have a runtime, has a great type system, really good compiler errors, good tooling, some functional programming patterns and metaprogramming. You can find more about me on my GitHub profile.

I'll be working on a new SQL interpreter for mock testing. This will be built specifically for testing and so the emphasis will be on correctness - it can be slow but the operations must always be correct. I'm hoping to build a working version of this and integrate it into the existing tests of SeaORM. Here is the discussion for this project.

Support TiDB in the SeaQL Ecosystem

Edit: This project was canceled.

Query Linter for SeaORM

Edit: This project was canceled.

Mentors

Chris Tsang

I am a strong believer in open source. I started my GitHub journey 10 years ago, when I published my first programming library. I had been looking for a programming language with speed, ergonomics and expressiveness - until I found Rust.

Seeing a niche and demand for data engineering tools in the Rust ecosystem, I founded SeaQL in 2020 and have been leading the development and maintaining the libraries since then.


Billy Chan

Hey, this is Billy from Hong Kong. I've been using open-source libraries ever since I started coding, but it wasn't until 2020 that I dedicated myself to being a Rust open-source developer.

I was also a full-stack developer, specialized in formulating requirement specifications for user interfaces and database structures, implementing and testing both frontend and backend from the ground up, and finally releasing the MVP for production and maintaining it for years to come.

I enjoy working with Rustaceans across the globe, building a better and sustainable ecosystem for Rust community. If you like what we do, consider starring, commenting, sharing and contributing, it would be much appreciated.


Sanford Pun

I'm Sanford, an enthusiastic software engineer who enjoys problem-solving! I've worked on Rust for a couple of years now. During my early days with Rust, I focused more on the field of graphics/image processing, where I fell in love with what the language has to offer! This year, I've been exploring data engineering in the StarfishQL project.

A toast to the endless potential of Rust!

Community

If you are interested in the projects and want to share your thoughts, please star and watch the SeaQL/summer-of-code repository on GitHub and join us on our Discord server!

· 5 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.8.0 today! Here are some feature highlights 🌟:

Migration Utilities Moved to sea-orm-migration crate

[#666] Utilities of SeaORM migration have been moved from sea-schema to the sea-orm-migration crate. Users are advised to upgrade from older versions with the following steps:

  1. Bump sea-orm version to 0.8.0.

  2. Replace sea-schema dependency with sea-orm-migration in your migration crate.

    migration/Cargo.toml
    [dependencies]
    - sea-schema = { version = "^0.7.0", ... }
    + sea-orm-migration = { version = "^0.8.0" }
  3. Find and replace use sea_schema::migration:: with use sea_orm_migration:: in your migration crate.

    - use sea_schema::migration::prelude::*;
    + use sea_orm_migration::prelude::*;

    - use sea_schema::migration::*;
    + use sea_orm_migration::*;
Designed by:

Chris Tsang
Contributed by:

Billy Chan

Generating New Migration

[#656] You can create a new migration with the migrate generate subcommand. This simplifies the migration process, as new migrations no longer need to be added manually.

# A migration file `MIGRATION_DIR/src/mYYYYMMDD_HHMMSS_create_product_table.rs` will be created.
# And, the migration file will be imported and included in the migrator located at `MIGRATION_DIR/src/lib.rs`.
sea-orm-cli migrate generate create_product_table
Proposed & Contributed by:

Viktor Bahr

Inserting One with Default

[#589] Insert a row populated with default values. Note that the target table should have default values defined for all of its columns.

let pear = fruit::ActiveModel {
    ..Default::default() // all attributes are `NotSet`
};

// The SQL statement:
// - MySQL: INSERT INTO `fruit` VALUES ()
// - SQLite: INSERT INTO "fruit" DEFAULT VALUES
// - PostgreSQL: INSERT INTO "fruit" VALUES (DEFAULT) RETURNING "id", "name", "cake_id"
let pear: fruit::Model = pear.insert(db).await?;
Proposed by:

Crypto-Virus
Contributed by:

Billy Chan

Checking if an ActiveModel is changed

[#683] You can check whether any field in an ActiveModel is Set with the help of the is_changed method.

let mut fruit: fruit::ActiveModel = Default::default();
assert!(!fruit.is_changed());

fruit.set(fruit::Column::Name, "apple".into());
assert!(fruit.is_changed());
Proposed by:

Karol Fuksiewicz
Contributed by:

Kirawi

Minor Improvements

  • [#670] Add max_connections option to sea-orm-cli generate entity subcommand
  • [#677] Derive Eq and Clone for DbErr
Proposed & Contributed by:

benluelo

Sebastien Guillemot

Integration Examples

SeaORM plays well with the other crates in the async ecosystem. It can be integrated easily with common RESTful frameworks and also gRPC frameworks; check out our new Tonic example to see how it works. More examples wanted!

Who's using SeaORM?

The following products are powered by SeaORM:



A lightweight web security auditing toolkit

The enterprise ready webhooks service

A personal search engine

SeaORM is the foundation of StarfishQL, an experimental graph database and query engine.

For more projects, see Built with SeaORM.

Our GitHub Sponsor profile is up! If you feel generous, a small donation will be greatly appreciated.

A big shout out to our sponsors 😇:

Émile Fugulin
Zachary Vander Velden
Dean Sheather
Shane Sveller
Sakti Dwi Cahyono
Unnamed Sponsor

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.9.x.

GSoC 2022

We are super excited to be selected as a Google Summer of Code 2022 mentor organization. The application is now closed, but the program is about to start! If you have thoughts over how we are going to implement the project ideas, feel free to participate in the discussion.

· 2 min read
Chris Tsang

FAQ.01 Why doesn't SeaORM nest objects for parent-child relations?

let cake_with_fruits: Vec<(cake::Model, Vec<fruit::Model>)> =
    Cake::find().find_with_related(Fruit).all(db).await?;

Consider the above API: Cake and Fruit are two separate models.

If you come from a dynamic language, you're probably used to:

struct Cake {
    id: u64,
    fruit: Fruit,
    ..
}

It's so convenient that you can simply:

let cake = Cake::find().one(db).await?;
println!("Fruit = {}", cake.fruit.name);

Sweet right? Okay so, the problem with this pattern is that it does not fit well with Rust.

Let's look at this playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=6fb0a981189ace081fbb2aa04f50146b

struct Parent {
    a: u64,
    child: Option<Child>,
}

struct ParentWithBox {
    a: u64,
    child: Option<Box<Child>>,
}

struct Child {
    a: u64,
    b: u64,
    c: u64,
    d: u64,
}

fn main() {
    dbg!(std::mem::size_of::<Parent>());
    dbg!(std::mem::size_of::<ParentWithBox>());
    dbg!(std::mem::size_of::<Child>());
}

What do you guess the output is?

[src/main.rs:21] std::mem::size_of::<Parent>() = 48
[src/main.rs:22] std::mem::size_of::<ParentWithBox>() = 16
[src/main.rs:23] std::mem::size_of::<Child>() = 32

In dynamic languages, objects are always held by pointers, and that maps to a Box in Rust. In Rust, we don't put objects in Boxes by default, because it forces the object to be allocated on the heap. And that is an extra cost: objects are always first constructed on the stack and then copied over to the heap.

Ref:

  1. https://users.rust-lang.org/t/how-to-create-large-objects-directly-in-heap/26405
  2. https://github.com/PoignardAzur/placement-by-return/blob/placement-by-return/text/0000-placement-by-return.md

We face the dilemma of either putting the object on the stack and wasting some space (Parent takes up 48 bytes whether child is None or not: 8 bytes for a plus 40 for Option<Child>, which needs room for Child's 32 bytes and a discriminant), or putting the object in a Box and wasting some cycles.

If you are new to Rust, all these might be unfamiliar, but a Rust programmer has to consciously make decisions over memory management, and we don't want to make decisions on behalf of our users.

That said, there were proposals to add an API in this style to SeaORM, and we might implement it in the future. Hopefully this sheds some light on the matter in the meantime.

· 8 min read
SeaQL Team

We are pleased to introduce StarfishQL to the Rust community today. StarfishQL is a graph database and query engine to enable graph analysis and visualization on the web. It is an experimental project, with its primary purpose to explore the dependency network of Rust crates published on crates.io.

Motivation

StarfishQL is a framework for providing a graph database and a graph query engine that interacts with it.

A concrete example (Freeport) involving the graph of crate dependency on crates.io is used for illustration. With this example, you can see StarfishQL in action.

At the end of the day, we're interested in performing graph analysis, that is, extracting meaningful information out of plain graph data. To achieve that, we believe that visualization is a crucial aid.

StarfishQL's query engine is designed to be able to incorporate different forms of visualization by using a flexible query language. However, the development of the project has been centred around the following, as showcased in our demo apps.

Traverse the dependency graph in the normal direction starting from the N most connected nodes.

Traverse the dependency tree in both forward and reverse directions starting from a particular node.

Design

In general, a query engine takes input queries written in a specific query language (e.g. SQL statements), performs the necessary operations in the database, and then outputs the data of interest to the user application. You may also view a query engine as an abstraction layer such that the user can design queries simply in the supported query language and let the query engine do the rest.

In the case of a graph query engine, the output data is a graph (wiki).

Graph query engine overview

In the case of StarfishQL, the query language is a custom language we defined in the JSON format, which enables the engine to be highly accessible and portable.

Implementation

In the example of Freeport, StarfishQL consists of the following three components.

Graph Query Engine

As a core component of StarfishQL, the graph query engine is a Rust backend application powered by the Rocket web framework and the SeaQL ecosystem.

The engine listens at the following endpoints for the corresponding operation:

You could also invoke the endpoints above programmatically.

Graph data are stored in a relational database:

  • Metadata - Definition of each entity and relation, e.g. attributes of crates and dependency
  • Node Data - An instance of an entity, e.g. crate name and version number
  • Edge Data - An instance of a relation, e.g. one crate depends on another

crates.io Crawler

To obtain the crate data to insert into the database, we used a fast, non-disruptive crawler on a local clone of the public index repo of crates.io.

Graph Visualization

We used d3.js to create force-directed graphs to display the results. The two colourful graphs above are such products.

Findings

Here are some interesting findings we made during the process.

Top-10 Dependencies

List of top 10 crates order by different decay modes.

Decay Mode: Immediate / Simple Connectivity

crate          connectivity
serde          17,441
serde_json     10,528
log            9,220
clap           6,323
thiserror      5,547
rand           5,340
futures        5,263
lazy_static    5,211
tokio          5,168
chrono         4,794

Decay Mode: Medium (.5) / Complex Connectivity

crate              connectivity
quote              4,126
syn                4,069
pure-rust-locales  4,067
reqwest            3,950
proc-macro2        3,743
num_threads        3,555
value-bag          3,506
futures-macro      3,455
time-macros        3,450
thiserror-impl     3,416

Decay Mode: None / Compound Connectivity

crate                     connectivity
unicode-xid               54,982
proc-macro2               54,949
quote                     54,910
syn                       54,744
rustc-std-workspace-core  51,650
libc                      51,645
serde_derive              51,056
serde                     51,054
jobserver                 50,567
cc                        50,566

If we look at Decay Mode: Immediate, where the connectivity is simply the number of immediate dependants, we can see that serde and serde_json are at the top. I guess that supports our decision of defining the query language in JSON.

Decay Mode: None tells another interesting story: when the connectivity is the entire tree of dependants, we are looking at the really core crates that are nested somewhere deep inside the dependency trees of most crates. In other words, these are the ones that are built along with the most crates. Under this setting, the utility crates that interact with the low-level, more fundamental aspects of Rust are ranked higher, like quote with syntax trees, proc-macro2 with procedural macros, and unicode-xid with Unicode checking.

Number of crates without Dependencies

19,369 out of 79,972 crates, or 24% of the crates, do not depend on any crates.

e.g. aa-a0, ..., zyx_test, zz-buffer, z_table

In other words, about 76% of the crates are standing on the shoulders of giants! 💪

Number of crates without Dependants

53,910 out of 79,972 crates, or 67% of the crates, have no dependants, i.e. no other crates depend on them.

e.g. aa-a-bot, ..., zzp-tools, zzzz_table

We imagine many of those crates are binaries/executables, if only we could figure out a way to check that... 🤔

As of March 30, 2022

Conclusion

StarfishQL allows flexible and portable definition, manipulation, retrieval, and visualization of graph data.

The graph query engine built in Rust provides a nice interface for any web applications to access data in the relational graph database with stable performance and memory safety.

Admittedly, StarfishQL is still in its infancy, so every detail in the design and implementation is subject to change. Fortunately, the good thing about this is, like all other open-source projects developed by brilliant Rust developers, you can contribute to it if you also find the concept interesting. With its addition to the SeaQL ecosystem, together we are one step closer to the vision of Rust for data engineering.

People

StarfishQL is created by the following SeaQL team members:

Chris Tsang
Billy Chan
Sanford Pun

Contributing

We are super excited to be selected as a Google Summer of Code 2022 mentor organization!

StarfishQL is one of the GSoC project ideas that opens for development proposals. Join us on GSoC 2022 by following the instructions on GSoC Contributing Guide.

· 5 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.7.0 today! Here are some feature highlights 🌟:

Update ActiveModel by JSON

[#492] If you want to save user input into the database you can easily convert JSON value into ActiveModel.

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Serialize, Deserialize)]
#[sea_orm(table_name = "fruit")]
pub struct Model {
    #[sea_orm(primary_key)]
    #[serde(skip_deserializing)] // Skip deserializing
    pub id: i32,
    pub name: String,
    pub cake_id: Option<i32>,
}

Set the attributes in ActiveModel with set_from_json method.

// An ActiveModel with primary key set
let mut fruit = fruit::ActiveModel {
    id: ActiveValue::Set(1),
    name: ActiveValue::NotSet,
    cake_id: ActiveValue::NotSet,
};

// Note that this method will not alter the primary key values in ActiveModel
fruit.set_from_json(json!({
    "id": 8,
    "name": "Apple",
    "cake_id": 1,
}))?;

assert_eq!(
    fruit,
    fruit::ActiveModel {
        id: ActiveValue::Set(1),
        name: ActiveValue::Set("Apple".to_owned()),
        cake_id: ActiveValue::Set(Some(1)),
    }
);

Create a new ActiveModel from JSON value with the from_json method.

let fruit = fruit::ActiveModel::from_json(json!({
    "name": "Apple",
}))?;

assert_eq!(
    fruit,
    fruit::ActiveModel {
        id: ActiveValue::NotSet,
        name: ActiveValue::Set("Apple".to_owned()),
        cake_id: ActiveValue::NotSet,
    }
);
Proposed by:

qltk
Contributed by:

Billy Chan

Support time crate in Model

[#602] You can define datetime columns in Model with the time crate, and migrate a Model originally defined with chrono to time.

Model defined in chrono crate.

use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "transaction_log")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub date: Date,                         // chrono::NaiveDate
    pub time: Time,                         // chrono::NaiveTime
    pub date_time: DateTime,                // chrono::NaiveDateTime
    pub date_time_tz: DateTimeWithTimeZone, // chrono::DateTime<chrono::FixedOffset>
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}

Model defined in time crate.

use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "transaction_log")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub date: TimeDate,                         // time::Date
    pub time: TimeTime,                         // time::Time
    pub date_time: TimeDateTime,                // time::PrimitiveDateTime
    pub date_time_tz: TimeDateTimeWithTimeZone, // time::OffsetDateTime
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}
Proposed by:

Tom Hacohen
Contributed by:

Billy Chan

Delete by Primary Key

[#590] Instead of selecting a Model from the database and then deleting it, you can delete a row from the database directly by its primary key.

let res: DeleteResult = Fruit::delete_by_id(38).exec(db).await?;
assert_eq!(res.rows_affected, 1);
Proposed by:

Shouvik Ghosh
Contributed by:

Zhenwei Guo

Paginate Results from Raw Query

[#617] You can paginate SelectorRaw and fetch Model in batch.

let mut cake_pages = cake::Entity::find()
    .from_raw_sql(Statement::from_sql_and_values(
        DbBackend::Postgres,
        r#"SELECT "cake"."id", "cake"."name" FROM "cake" WHERE "id" = $1"#,
        vec![1.into()],
    ))
    .paginate(db, 50);

while let Some(cakes) = cake_pages.fetch_and_next().await? {
    // Do something on cakes: Vec<cake::Model>
}
Proposed by:

Bastian
Contributed by:

shinbunbun

Create Database Index

[#593] To create indexes in the database, instead of writing IndexCreateStatement manually, you can derive them from Entity using Schema::create_index_from_entity.

use sea_orm::{sea_query, tests_cfg::*, Schema};

let builder = db.get_database_backend();
let schema = Schema::new(builder);

let stmts = schema.create_index_from_entity(indexes::Entity);
assert_eq!(stmts.len(), 2);

let idx = sea_query::Index::create()
    .name("idx-indexes-index1_attr")
    .table(indexes::Entity)
    .col(indexes::Column::Index1Attr)
    .to_owned();
assert_eq!(builder.build(&stmts[0]), builder.build(&idx));

let idx = sea_query::Index::create()
    .name("idx-indexes-index2_attr")
    .table(indexes::Entity)
    .col(indexes::Column::Index2Attr)
    .to_owned();
assert_eq!(builder.build(&stmts[1]), builder.build(&idx));
Proposed by:

Jochen Görtler
Contributed by:

Nick Burrett

Our GitHub Sponsor profile is up! If you feel generous, a small donation will be greatly appreciated.

A big shout out to our sponsors 😇:

Émile Fugulin
Zachary Vander Velden
Dean Sheather
Shane Sveller
Sakti Dwi Cahyono
Unnamed Sponsor

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.8.x.

GSoC 2022

We are super excited to be selected as a Google Summer of Code 2022 mentor organization. Prospective contributors, please visit our GSoC 2022 Organization Profile!

· 2 min read
SeaQL Team

GSoC 2022 Organization Profile

We are super excited to be selected as a Google Summer of Code 2022 mentor organization. Thank you everyone in the SeaQL community for your support and adoption!

In 2020, when we were developing systems in Rust, we noticed a missing piece in the ecosystem: an ORM that integrates well with the Rust async ecosystem. With that in mind, we designed SeaORM to have a familiar API that welcomes developers from node.js, Go, Python, PHP, Ruby and your favourite language.

The first piece of tooling we released is SeaQuery, a query builder with a fluent API. It has a simplified AST that reflects SQL syntax. It frees you from stitching strings together whenever you need to construct SQL dynamically and safely, with the advantages of Rust's typing.
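
For a taste of that fluent API, here is a minimal sketch; the cake table and its columns are made-up placeholders for illustration:

use sea_query::{Alias, Expr, PostgresQueryBuilder, Query};

// Dynamically and safely build a query, with no string stitching involved
let sql = Query::select()
    .column(Alias::new("name"))
    .from(Alias::new("cake"))
    .and_where(Expr::col(Alias::new("id")).eq(1))
    .to_string(PostgresQueryBuilder);

assert_eq!(sql, r#"SELECT "name" FROM "cake" WHERE "id" = 1"#);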

The second piece of tooling is SeaSchema, a schema manager that allows you to discover and manipulate database schemas. The type definition of the schema is database-specific, thus reflecting the features of MySQL, Postgres and SQLite closely.
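
As a hypothetical sketch of what discovery looks like (the connection URL is a placeholder, and whether discover returns a Result varies between sea-schema versions):

use sea_schema::postgres::discovery::SchemaDiscovery;
use sqlx::PgPool;

async fn list_tables() -> Result<(), Box<dyn std::error::Error>> {
    let pool = PgPool::connect("postgres://user:pass@localhost/database").await?;
    // Discover every table under the `public` schema
    let schema_discovery = SchemaDiscovery::new(pool, "public");
    let schema = schema_discovery.discover().await?;
    for table in schema.tables {
        println!("{}", table.info.name);
    }
    Ok(())
}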

The third piece of tooling is SeaORM, an Object Relational Mapper for building web services in Rust, whether it's REST, gRPC or GraphQL. We have "async & dynamic" in mind, so developers from dynamic languages can feel right at home.

But why stop at three?

This is just the foundation to set up Rust as the best language for data engineering, and we have many more ideas on our idea list!

Your participation is what makes us unique; your adoption is what drives us forward.

Thank you everyone for all your karma, it's the Rust community here that makes it possible. We will gladly take the mission to nurture open source developers during GSoC.

Prospective contributors, stay in touch with us. We also welcome any discussion on the future of the Rust ecosystem and the SeaQL organization.

GSoC 2022 Idea List

· 5 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.6.0 today! Here are some feature highlights 🌟:

Migration

[#335] Version control your database schema with migrations written in SeaQuery or in raw SQL. View the migration docs to learn more.

  1. Set up the migration directory by executing sea-orm-cli migrate init.

    migration
    ├── Cargo.toml
    ├── README.md
    └── src
        ├── lib.rs
        ├── m20220101_000001_create_table.rs
        └── main.rs
  2. Define the migration in SeaQuery.

    use sea_schema::migration::prelude::*;

    pub struct Migration;

    impl MigrationName for Migration {
        fn name(&self) -> &str {
            "m20220101_000001_create_table"
        }
    }

    #[async_trait::async_trait]
    impl MigrationTrait for Migration {
        async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
            manager
                .create_table( ... )
                .await
        }

        async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
            manager
                .drop_table( ... )
                .await
        }
    }
  3. Apply the migration by executing sea-orm-cli migrate.

    $ sea-orm-cli migrate
    Applying all pending migrations
    Applying migration 'm20220101_000001_create_table'
    Migration 'm20220101_000001_create_table' has been applied
Designed by:

Chris Tsang
Contributed by:

Billy Chan

Support DateTimeUtc & DateTimeLocal in Model

[#489] Represent a database timestamp column in Model with an attribute of type DateTimeLocal (chrono::DateTime<Local>) or DateTimeUtc (chrono::DateTime<Utc>).

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "satellite")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub satellite_name: String,
    pub launch_date: DateTimeUtc,
    pub deployment_date: DateTimeLocal,
}
Proposed by:

lz1998

Chris Tsang
Contributed by:

Charles·Chege

Billy Chan

Mock Join Results

[#455] Construct mock results of a related model with a tuple of models.

let db = MockDatabase::new(DbBackend::Postgres)
    // Mocking result of cake with its related fruit
    .append_query_results(vec![vec![(
        cake::Model {
            id: 1,
            name: "Apple Cake".to_owned(),
        },
        fruit::Model {
            id: 2,
            name: "Apple".to_owned(),
            cake_id: Some(1),
        },
    )]])
    .into_connection();

assert_eq!(
    cake::Entity::find()
        .find_also_related(fruit::Entity)
        .all(&db)
        .await?,
    vec![(
        cake::Model {
            id: 1,
            name: "Apple Cake".to_owned(),
        },
        Some(fruit::Model {
            id: 2,
            name: "Apple".to_owned(),
            cake_id: Some(1),
        })
    )]
);
Proposed by:

Bastian
Contributed by:

Bastian

Billy Chan

Support Max Connection Lifetime Option

[#493] You can set the maximum lifetime of individual connections with the max_lifetime method.

let mut opt = ConnectOptions::new("protocol://username:password@host/database".to_owned());
opt.max_lifetime(Duration::from_secs(8))
    .max_connections(100)
    .min_connections(5)
    .connect_timeout(Duration::from_secs(8))
    .idle_timeout(Duration::from_secs(8))
    .sqlx_logging(true);

let db = Database::connect(opt).await?;
Proposed by:

Émile Fugulin
Contributed by:

Billy Chan

SeaORM CLI & Codegen Updates

  • [#433] Generates the column_name macro attribute for columns that are not named in snake case
  • [#335] Introduces migration subcommands sea-orm-cli migrate
Proposed by:

Gabriel Paulucci
Contributed by:

Billy Chan

Our GitHub Sponsor profile is up! If you feel generous, a small donation will be greatly appreciated.

A big shout out to our sponsors 😇:

Émile Fugulin
Zachary Vander Velden
Shane Sveller
Sakti Dwi Cahyono
Unnamed Sponsor

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.7.x.

· 4 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.5.0 today! Here are some feature highlights 🌟:

Insert and Update Return Model

[#339] As requested by many of our community members, you can now get the refreshed Model after insert or update operations. If you want to mutate the model and save it back to the database, you can convert it into an ActiveModel with the method into_active_model.

Breaking Changes:

  • ActiveModel::insert and ActiveModel::update return Model instead of ActiveModel
  • Method ActiveModelBehavior::after_save takes Model as input instead of ActiveModel
// Construct an `ActiveModel`
let active_model = ActiveModel {
    name: Set("Classic Vanilla Cake".to_owned()),
    ..Default::default()
};
// Do insert
let cake: Model = active_model.insert(db).await?;
assert_eq!(
    cake,
    Model {
        id: 1,
        name: "Classic Vanilla Cake".to_owned(),
    }
);

// Convert `Model` into `ActiveModel`
let mut active_model = cake.into_active_model();
// Change the name of the cake
active_model.name = Set("Chocolate Cake".to_owned());
// Do update
let cake: Model = active_model.update(db).await?;
assert_eq!(
    cake,
    Model {
        id: 1,
        name: "Chocolate Cake".to_owned(),
    }
);

// Do delete
cake.delete(db).await?;
Proposed by:

Julien Nicoulaud

Edgar
Contributed by:

Billy Chan

ActiveValue Revamped

[#340] ActiveValue is now defined as an enum instead of a struct. Its public API remains unchanged, except that Unset was deprecated and ActiveValue::NotSet should be used instead.

Breaking Changes:

  • Rename method sea_orm::unchanged_active_value_not_intended_for_public_use to sea_orm::Unchanged
  • Rename method ActiveValue::unset to ActiveValue::not_set
  • Rename method ActiveValue::is_unset to ActiveValue::is_not_set
  • PartialEq of ActiveValue will also check the equality of state instead of just checking the equality of value
/// Defines a stateful value used in ActiveModel.
pub enum ActiveValue<V>
where
    V: Into<Value>,
{
    /// A defined [Value] actively being set
    Set(V),
    /// A defined [Value] that remains unchanged
    Unchanged(V),
    /// An undefined [Value]
    NotSet,
}
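
Because PartialEq now also compares the state, two values wrapping the same inner value but in different states no longer compare equal. A minimal sketch of the new behaviour:

use sea_orm::ActiveValue;

// Same inner value, different states: not equal under the revamped PartialEq
assert_ne!(ActiveValue::Set(1), ActiveValue::Unchanged(1));
// Same state and same inner value: still equal
assert_eq!(ActiveValue::Set(1), ActiveValue::Set(1));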
Designed by:

Chris Tsang
Contributed by:

Billy Chan

SeaORM CLI & Codegen Updates

Install latest version of sea-orm-cli:

cargo install sea-orm-cli

Updates related to entity file generation (sea-orm-cli generate entity):

  • [#348] Discovers and defines PostgreSQL enums
  • [#386] Supports SQLite database, you can generate entity files from all supported databases including MySQL, PostgreSQL and SQLite
Proposed by:

Zachary Vander Velden
Contributed by:

Charles·Chege

Billy Chan

Tracing

[#373] You can trace the queries executed by SeaORM with the debug-print feature enabled and a tracing-subscriber up and running.

// (using tokio as the async runtime)
#[tokio::main]
pub async fn main() {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .with_test_writer()
        .init();

    // ...
}

Contributed by:

Marco Napetti

Our GitHub Sponsor profile is up! If you feel generous, a small donation will be greatly appreciated.

A big shout out to our sponsors 😇:

Sakti Dwi Cahyono
Shane Sveller
Zachary Vander Velden
Praveen Perera
Unnamed Sponsor

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.6.x.

· 4 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.4.0 today! Here are some feature highlights 🌟:

Rust Edition 2021

[#273] Upgrading SeaORM to Rust Edition 2021 🦀❤🐚!

Contributed by:

Carter Snook

Enumeration

[#252] You can now use Rust enums in a model, where the values are mapped to a database string, integer or native enum. Learn more here.

#[derive(Debug, Clone, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "active_enum")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    // Use our custom enum in a model
    pub category: Option<Category>,
    pub color: Option<Color>,
    pub tea: Option<Tea>,
}

#[derive(Debug, Clone, PartialEq, EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "String", db_type = "String(Some(1))")]
// An enum serialized into the database as a string value
pub enum Category {
    #[sea_orm(string_value = "B")]
    Big,
    #[sea_orm(string_value = "S")]
    Small,
}

#[derive(Debug, Clone, PartialEq, EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "i32", db_type = "Integer")]
// An enum serialized into the database as an integer value
pub enum Color {
    #[sea_orm(num_value = 0)]
    Black,
    #[sea_orm(num_value = 1)]
    White,
}

#[derive(Debug, Clone, PartialEq, EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "String", db_type = "Enum", enum_name = "tea")]
// An enum serialized into the database as a database native enum
pub enum Tea {
    #[sea_orm(string_value = "EverydayTea")]
    EverydayTea,
    #[sea_orm(string_value = "BreakfastTea")]
    BreakfastTea,
}
Designed by:

Chris Tsang
Contributed by:

Billy Chan

Supports RETURNING Clause on PostgreSQL

[#183] When performing an insert or update operation on an ActiveModel against PostgreSQL, a RETURNING clause will be used to perform the select in a single SQL statement.

// For PostgreSQL
cake::ActiveModel {
    name: Set("Apple Pie".to_owned()),
    ..Default::default()
}
.insert(&postgres_db)
.await?;

assert_eq!(
    postgres_db.into_transaction_log(),
    vec![Transaction::from_sql_and_values(
        DbBackend::Postgres,
        r#"INSERT INTO "cake" ("name") VALUES ($1) RETURNING "id", "name""#,
        vec!["Apple Pie".into()]
    )]
);

// For MySQL & SQLite
cake::ActiveModel {
    name: Set("Apple Pie".to_owned()),
    ..Default::default()
}
.insert(&other_db)
.await?;

assert_eq!(
    other_db.into_transaction_log(),
    vec![
        Transaction::from_sql_and_values(
            DbBackend::MySql,
            r#"INSERT INTO `cake` (`name`) VALUES (?)"#,
            vec!["Apple Pie".into()]
        ),
        Transaction::from_sql_and_values(
            DbBackend::MySql,
            r#"SELECT `cake`.`id`, `cake`.`name` FROM `cake` WHERE `cake`.`id` = ? LIMIT ?"#,
            vec![15.into(), 1u64.into()]
        )
    ]
);
Proposed by:

Marlon Brandão de Sousa
Contributed by:

Billy Chan

Axum Integration Example

[#297] Added Axum integration example. More examples wanted!

Contributed by:

Yoshiera

Our GitHub Sponsor profile is up! If you feel generous, a small donation will be greatly appreciated.

A big shout out to our first sponsors 😇:

Shane Sveller
Zachary Vander Velden
Unnamed Sponsor

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.5.x.

· 4 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.3.0 today! Here are some feature highlights 🌟:

Transaction

[#222] Use database transactions to perform atomic operations

Two transaction APIs are provided:

  • Closure style. It will be committed on Ok and rolled back on Err.

    // <Fn, A, B> -> Result<A, B>
    db.transaction::<_, _, DbErr>(|txn| {
        Box::pin(async move {
            bakery::ActiveModel {
                name: Set("SeaSide Bakery".to_owned()),
                ..Default::default()
            }
            .save(txn)
            .await?;

            bakery::ActiveModel {
                name: Set("Top Bakery".to_owned()),
                ..Default::default()
            }
            .save(txn)
            .await?;

            Ok(())
        })
    })
    .await;

  • RAII style. begin the transaction, followed by commit or rollback; if txn goes out of scope, it is automatically rolled back.

    let txn = db.begin().await?;

    // do something with txn

    txn.commit().await?;

Contributed by:

Marco Napetti
Chris Tsang

Streaming

[#222] Use async stream on any Select for memory efficiency.

let mut stream = Fruit::find().stream(&db).await?;

while let Some(item) = stream.try_next().await? {
    let item: fruit::ActiveModel = item.into();
    // do something with item
}

Contributed by:

Marco Napetti

API for custom logic on save & delete

[#210] We redefined the trait methods of ActiveModelBehavior. You can now perform custom validation before and after insert, update, save, delete actions. You can abort an action even after it is done, if you are inside a transaction.

impl ActiveModelBehavior for ActiveModel {
    // Override default values
    fn new() -> Self {
        Self {
            serial: Set(Uuid::new_v4()),
            ..ActiveModelTrait::default()
        }
    }

    // Triggered before insert / update
    fn before_save(self, insert: bool) -> Result<Self, DbErr> {
        if self.price.as_ref() <= &0.0 {
            Err(DbErr::Custom(format!(
                "[before_save] Invalid Price, insert: {}",
                insert
            )))
        } else {
            Ok(self)
        }
    }

    // Triggered after insert / update
    fn after_save(self, insert: bool) -> Result<Self, DbErr> {
        Ok(self)
    }

    // Triggered before delete
    fn before_delete(self) -> Result<Self, DbErr> {
        Ok(self)
    }

    // Triggered after delete
    fn after_delete(self) -> Result<Self, DbErr> {
        Ok(self)
    }
}

Contributed by:

Muhannad
Billy Chan

Generate Entity Models That Derive Serialize / Deserialize

[#237] You can use sea-orm-cli to generate entity models that also derive serde Serialize / Deserialize traits.
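
For instance, entity generation can be invoked with the --with-serde flag; the connection URL and output directory below are placeholders:

sea-orm-cli generate entity -u protocol://username:password@host/database -o src/entity --with-serde both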

//! SeaORM Entity. Generated by sea-orm-codegen 0.3.0

use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Serialize, Deserialize)]
#[sea_orm(table_name = "cake")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    #[sea_orm(column_type = "Text", nullable)]
    pub name: Option<String>,
}

// ...

Contributed by:

Tim Eggert

Introduce DeriveIntoActiveModel macro & IntoActiveValue Trait

[#240] Introduced a new derive macro DeriveIntoActiveModel for implementing IntoActiveModel on structs. This is useful when creating your own struct with only partial fields of a model, for example as a form submission in a REST API.

The IntoActiveValue trait allows converting Option<T> into ActiveValue<T> with the method into_active_value.

// Define regular model as usual
#[derive(Clone, Debug, PartialEq, DeriveModel, DeriveActiveModel)]
#[sea_orm(table_name = "users")]
pub struct Model {
pub id: Uuid,
pub created_at: DateTimeWithTimeZone,
pub updated_at: DateTimeWithTimeZone,
pub email: String,
pub password: String,
pub full_name: Option<String>,
pub phone: Option<String>,
}

// Create a new struct with some fields omitted
#[derive(DeriveIntoActiveModel)]
pub struct NewUser {
// id, created_at and updated_at are omitted from this struct,
// and will always be `ActiveValue::unset`
pub email: String,
pub password: String,
// Full name is usually optional, but it can be required here
pub full_name: String,
// Option implements `IntoActiveValue`, and when `None` will be `unset`
pub phone: Option<String>,
}

#[derive(DeriveIntoActiveModel)]
pub struct UpdateUser {
// Option<Option<T>> allows for Some(None) to update the column to be NULL
pub phone: Option<Option<String>>,
}
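
As a minimal usage sketch (the field values here are made up for illustration), converting such a partial struct produces an ActiveModel with the omitted fields left unset:

use sea_orm::IntoActiveModel;

let new_user = NewUser {
    email: "alice@example.com".to_owned(),
    password: "hashed-password".to_owned(),
    full_name: "Alice".to_owned(),
    phone: None,
};

// `id`, `created_at` and `updated_at` remain unset;
// `phone: None` is converted into an unset value as well
let active_model = new_user.into_active_model();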

Contributed by:

Ari Seyhun

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.4.x.

· 2 min read
SeaQL Team

🎉 We are pleased to release SeaORM 0.2.4 today! Some feature highlights:

Better ergonomics when working with a custom select list

[#208] Use Select::into_values to quickly select a custom column list and destruct as tuple.

use sea_orm::{entity::*, query::*, tests_cfg::cake, DeriveColumn, EnumIter};

#[derive(Copy, Clone, Debug, EnumIter, DeriveColumn)]
enum QueryAs {
    CakeName,
    NumOfCakes,
}

let res: Vec<(String, i64)> = cake::Entity::find()
    .select_only()
    .column_as(cake::Column::Name, QueryAs::CakeName)
    .column_as(cake::Column::Id.count(), QueryAs::NumOfCakes)
    .group_by(cake::Column::Name)
    .into_values::<_, QueryAs>()
    .all(&db)
    .await?;

assert_eq!(
    res,
    vec![("Chocolate Forest".to_owned(), 2i64)]
);

Contributed by:

Muhannad

Rename column name & column enum variant

[#209] Rename the column name and enum variant of a model attribute, especially helpful when the column name is a Rust keyword.

mod my_entity {
    use sea_orm::entity::prelude::*;

    #[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
    #[sea_orm(table_name = "my_entity")]
    pub struct Model {
        #[sea_orm(primary_key, enum_name = "IdentityColumn", column_name = "id")]
        pub id: i32,
        #[sea_orm(column_name = "type")]
        pub type_: String,
    }

    //...
}

assert_eq!(my_entity::Column::IdentityColumn.to_string().as_str(), "id");
assert_eq!(my_entity::Column::Type.to_string().as_str(), "type");

Contributed by:

Billy Chan

not on a condition tree

[#145] Build a complex condition tree with Condition.

use sea_orm::{entity::*, query::*, tests_cfg::cake, sea_query::Expr, DbBackend};

assert_eq!(
    cake::Entity::find()
        .filter(
            Condition::all()
                .add(
                    Condition::all()
                        .not()
                        .add(Expr::val(1).eq(1))
                        .add(Expr::val(2).eq(2))
                )
                .add(
                    Condition::any()
                        .add(Expr::val(3).eq(3))
                        .add(Expr::val(4).eq(4))
                )
        )
        .build(DbBackend::Postgres)
        .to_string(),
    r#"SELECT "cake"."id", "cake"."name" FROM "cake" WHERE (NOT (1 = 1 AND 2 = 2)) AND (3 = 3 OR 4 = 4)"#
);

Contributed by:

nitnelave
6xzo

Community

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Here is the roadmap for SeaORM 0.3.x.

· 5 min read
Chris Tsang

We are pleased to introduce SeaORM 0.2.2 to the Rust community today. It has been our pleasure to receive feedback and contributions to SeaQuery and SeaORM from awesome people since 0.1.0.

Rust is a wonderful language that can be used to build anything. One of the FAQs is "Are We Web Yet?", and if Rocket (or your favourite web framework) is Rust's Rails, then SeaORM is precisely Rust's ActiveRecord.

SeaORM is an async ORM built from the ground up, designed to play well with the async ecosystem, whether it's actix, async-std, tokio or any web framework built on top.

Let's have a quick tour of SeaORM.

Async

Here is how you'd execute multiple queries in parallel:

// execute multiple queries in parallel
let cakes_and_fruits: (Vec<cake::Model>, Vec<fruit::Model>) =
    futures::try_join!(Cake::find().all(&db), Fruit::find().all(&db))?;

Dynamic

You can use SeaQuery to build complex queries without 'fighting the ORM':

// build a subquery with ease
let cakes_with_filling: Vec<cake::Model> = cake::Entity::find()
    .filter(
        Condition::any().add(
            cake::Column::Id.in_subquery(
                Query::select()
                    .column(cake_filling::Column::CakeId)
                    .from(cake_filling::Entity)
                    .to_owned(),
            ),
        ),
    )
    .all(&db)
    .await?;

More on SeaQuery

Testable

To write unit tests, you can use our mock interface:

// Setup mock connection
let db = MockDatabase::new(DbBackend::Postgres)
    .append_query_results(vec![
        vec![
            cake::Model {
                id: 1,
                name: "New York Cheese".to_owned(),
            },
        ],
    ])
    .into_connection();

// Perform your application logic
assert_eq!(
    cake::Entity::find().one(&db).await?,
    Some(cake::Model {
        id: 1,
        name: "New York Cheese".to_owned(),
    })
);

// Compare it against the expected transaction log
assert_eq!(
    db.into_transaction_log(),
    vec![
        Transaction::from_sql_and_values(
            DbBackend::Postgres,
            r#"SELECT "cake"."id", "cake"."name" FROM "cake" LIMIT $1"#,
            vec![1u64.into()]
        ),
    ]
);

More on testing

Service Oriented

Here is an example Rocket handler with pagination:

#[get("/?<page>&<posts_per_page>")]
async fn list(
conn: Connection<Db>,
page: Option<usize>,
per_page: Option<usize>,
) -> Template {
// Set page number and items per page
let page = page.unwrap_or(1);
let per_page = per_page.unwrap_or(10);

// Setup paginator
let paginator = Post::find()
.order_by_asc(post::Column::Id)
.paginate(&conn, per_page);
let num_pages = paginator.num_pages().await.unwrap();

// Fetch paginated posts
let posts = paginator
.fetch_page(page - 1)
.await
.expect("could not retrieve posts");

Template::render(
"index",
context! {
page: page,
per_page: per_page,
posts: posts,
num_pages: num_pages,
},
)
}

Full Rocket example

We are building more examples for other web frameworks too.

People

SeaQL is a community driven project. We welcome you to participate, contribute and together build for Rust's future.

Core Members

Chris Tsang
Billy Chan

Contributors

As a courtesy, here is the list of SeaQL's early contributors (in alphabetical order):

Ari Seyhun
Ayomide Bamidele
Ben Armstead
Bobby Ng
Daniel Lyne
Hirtol
Sylvie Rinner
Marco Napetti
Markus Merklinger
Muhannad
nitnelave
Raphaël Duchaîne
Rémi Kalbe
Sam Samai

· One min read
Chris Tsang

Today we will outline our release plan in the near future.

One of Rust's slogans is Stability Without Stagnation, and SeaQL's take on it is 'progression without stagnation'.

Before reaching 1.0, we will be releasing every week, incorporating the latest changes and merged pull requests. There will be at most one incompatible release per month, so you will be expecting 0.2 in Sep 2021 and 0.9 in Apr 2022. We will decide by then whether the next release is an incremental 0.10 or a stable 1.0.

After that, a major release will be rolled out every year. So you will probably be expecting a 2.0 in 2023.

All of this is only made possible by a solid infrastructure. While we have a test suite, its coverage will likely never be enough. We urge you to submit test cases to SeaORM if a particular feature is of importance to you.

We hope that a rolling release model will provide momentum to the community and propel us forward in the near future.

· One min read
Chris Tsang

After 8 months of secrecy, SeaORM is now public!

The Rust async ecosystem is definitely thriving, with Tokio announcing Axum a week before.

We are now busy with brush-ups, heading towards our announcement in Sep.

If you stumbled upon us just now, well, hello! We sincerely invite you to be our alpha tester.

· One min read
Chris Tsang

One year ago, when we were writing data processing algorithms in Rust, we needed an async library to interface with a database. Back then, there weren't many choices, so we had to write our own.

In December last year, we released SeaQuery, and received welcoming responses from the community. We decided to push the project further and develop a full-blown async ORM.

It has been a bumpy ride, as designing an async ORM requires working within and sometimes around Rust's unique type system. After several iterations of experimentation, I think we've attained a balance between static & dynamic and compile-time & run-time that offers the benefits of the Rust language while still being familiar and easy to work with for those who come from other languages.

SeaORM is tentatively set to be released in Sep 2021 and stabilized in May 2022. We hope that SeaORM will become a go-to choice for working with databases in Rust, and that the Rust language will be adopted by more organizations in building applications.

If you are intrigued as I am, please stay in touch and join the community.

Share your thoughts here.