This is the full developer documentation for ldbc. # Start of ldbc documentation # Integration of HikariCP with ldbc ## Introduction HikariCP is one of the most widely used database connection pool libraries in Java. When combined with ldbc, it enables efficient database connection management. ## Adding Dependencies First, add the necessary dependencies to build.sbt: ```scala libraryDependencies ++= Seq( "com.mysql" % "mysql-connector-j" % "8.4.0", "com.zaxxer" % "HikariCP" % "6.2.1" ) ``` ## Basic HikariCP Configuration HikariCP has many configuration options. Here are the main configuration items: ```scala import com.zaxxer.hikari.HikariConfig import com.zaxxer.hikari.HikariDataSource val config = new HikariConfig() config.setJdbcUrl("jdbc:mysql://localhost:3306/database") config.setUsername("username") config.setPassword("password") // Basic connection pool settings config.setMaximumPoolSize(10) // Maximum pool size config.setMinimumIdle(5) // Minimum idle connections config.setConnectionTimeout(30000) // Connection timeout (milliseconds) config.setIdleTimeout(600000) // Idle timeout (milliseconds) val dataSource = new HikariDataSource(config) ``` ## Integration Example with ldbc Here's a basic example of combining HikariCP with ldbc: ```scala import cats.effect.* import com.zaxxer.hikari.HikariDataSource import ldbc.dsl.* import ldbc.dsl.codec.Codec import jdbc.connector.* // Data model definition case class User(id: Int, name: String, email: String) object User: given Codec[User] = Codec.derived[User] object HikariExample extends IOApp.Simple: def run: IO[Unit] = // HikariCP configuration val ds = new HikariDataSource() ds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb") ds.setUsername("user") ds.setPassword("password") // Establishing connection val program = for hikari <- Resource.fromAutoCloseable(IO(ds)) execution <- ExecutionContexts.fixedThreadPool[IO](10) connection <- ConnectionProvider.fromDataSource[IO](hikari, execution).createConnection() yield connection // Query execution program.use { conn => for users <- sql"SELECT * FROM users".query[User].to[List].readOnly(conn) _ <- IO.println(s"Found users: $users") yield () } ``` ## Advanced Configuration Examples HikariCP offers various settings for performance tuning: ```scala val config = new HikariConfig() // Basic settings config.setPoolName("MyPool") // Set pool name config.setAutoCommit(false) // Disable auto-commit // Performance settings config.setMaxLifetime(1800000) // Maximum connection lifetime (30 minutes) config.setValidationTimeout(5000) // Connection validation timeout config.setLeakDetectionThreshold(60000) // Connection leak detection threshold // MySQL-specific settings config.addDataSourceProperty("cachePrepStmts", "true") config.addDataSourceProperty("prepStmtCacheSize", "250") config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048") ``` ## Summary Combining HikariCP with ldbc provides the following benefits: - Efficient connection pooling - Simplified transaction management - Type-safe query execution - Pure functional programming using Cats Effect With proper configuration and error handling, you can achieve stable, high-performance database access. # Integration of Http4s and ldbc This guide explains how to build a web application by combining [Http4s](https://http4s.org/) and ldbc. ## Introduction Http4s is a pure functional HTTP server and client library for Scala. By combining it with ldbc, you can build web applications with type-safe database access. 
## Adding Dependencies First, you need to add the following dependencies to your build.sbt: ```scala libraryDependencies ++= Seq( "org.http4s" %% "http4s-dsl" % "0.23.30", "org.http4s" %% "http4s-ember-server" % "0.23.30", "org.http4s" %% "http4s-circe" % "0.23.30", "io.circe" %% "circe-generic" % "0.14.10" ) ``` ## Basic Usage ### 1. Table Definition First, create models and table definitions corresponding to your database tables: ```scala // Model definition case class City( id: Int, name: String, countryCode: String, district: String, population: Int ) // JSON encoder definition object City: given Encoder[City] = Encoder.derived[City] // Table definition class CityTable extends Table[City]("city"): // Set column naming convention (optional) given Naming = Naming.PASCAL // Column definitions def id: Column[Int] = int("ID").unsigned.autoIncrement.primaryKey def name: Column[String] = char(35) def countryCode: Column[String] = char(3).unique def district: Column[String] = char(20) def population: Column[Int] = int() // Mapping definition override def * : Column[City] = (id *: name *: countryCode *: district *: population).to[City] val cityTable = TableQuery[CityTable] ``` ### 2. Database Connection Configuration Configure the database connection: ```scala private def provider = ConnectionProvider .default[IO]("127.0.0.1", 13306, "ldbc", "password", "world") .setSSL(SSL.Trusted) ``` Main options available in connection configuration: - Hostname - Port number - Database name - Username - Password - SSL settings (Trusted, Verify, etc.) ### 3. HTTP Route Definition Define Http4s routes and incorporate ldbc queries: ```scala private def routes(conn: Connection[IO]): HttpRoutes[IO] = HttpRoutes.of[IO] { case GET -> Root / "cities" => for cities <- cityTable.selectAll.query.to[List].readOnly(conn) result <- Ok(cities.asJson) yield result } ``` ### 4. Server Startup Finally, start the Http4s server: ```scala object Main extends ResourceApp.Forever: override def run(args: List[String]): Resource[IO, Unit] = for conn <- provider.createConnection() _ <- EmberServerBuilder .default[IO] .withHttpApp(routes(conn).orNotFound) .build yield () ``` ## Advanced Examples ### Adding Custom Queries Example of implementing searches with specific conditions or complex queries: ```scala private def routes(conn: Connection[IO]): HttpRoutes[IO] = HttpRoutes.of[IO] { case GET -> Root / "cities" / "search" / name => for cities <- cityTable.filter(_.name === name).query.to[List].readOnly(conn) result <- Ok(cities.asJson) yield result case GET -> Root / "cities" / "population" / IntVar(minPopulation) => for cities <- cityTable.filter(_.population >= minPopulation).query.to[List].readOnly(conn) result <- Ok(cities.asJson) yield result } ``` ## Error Handling Example of properly handling database errors: ```scala private def handleDatabaseError[A](action: IO[A]): IO[Response[IO]] = action.attempt.flatMap { case Right(value) => Ok(value.asJson) case Left(error) => InternalServerError(s"Database error: ${error.getMessage}") } private def routes(conn: Connection[IO]): HttpRoutes[IO] = HttpRoutes.of[IO] { case GET -> Root / "cities" => handleDatabaseError { cityTable.selectAll.query.to[List].readOnly(conn) } } ``` ## Summary Combining Http4s and ldbc offers the following advantages: - Type-safe database access - Benefits of pure functional programming - Flexible routing and error handling In actual application development, you can combine these basic patterns to implement more complex functionality. 
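As one further illustration, here is a minimal sketch of an endpoint that combines an Http4s path parameter with a parameterized plain ldbc query instead of the query builder. It assumes the `City` model, its JSON `Encoder`, the `conn` setup, and the imports used in the examples above, plus an ldbc `Codec[City]` (or `Decoder[City]`) instance for decoding rows; the route path is illustrative.

```scala
// Sketch only: assumes the same context as the routes above, plus `given Codec[City]` for row decoding
private def cityById(conn: Connection[IO]): HttpRoutes[IO] =
  HttpRoutes.of[IO] {
    case GET -> Root / "cities" / IntVar(id) =>
      for
        city   <- sql"SELECT ID, Name, CountryCode, District, Population FROM city WHERE ID = $id"
                    .query[City]
                    .to[Option]
                    .readOnly(conn)
        result <- city.fold(NotFound())(c => Ok(c.asJson))
      yield result
  }
```

Here `to[Option]` yields `None` when no row matches, which maps naturally onto a 404 response.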
# Observability with OpenTelemetry This page explains how to achieve observability through database query tracing, metrics collection, and logging by integrating ldbc with OpenTelemetry. ## What is OpenTelemetry? [OpenTelemetry](https://opentelemetry.io/) is an open-source framework for implementing application observability. It provides features such as distributed tracing, metrics collection, and logging, enabling an integrated approach to monitoring and analyzing application performance and behavior. By integrating with OpenTelemetry, ldbc can benefit from: - Tracking database query execution time and processing - Measuring query performance - Error detection and diagnosis - System-wide visualization and monitoring ## Required Dependencies To use OpenTelemetry with ldbc, you need to add the following dependencies to your project: ```scala libraryDependencies ++= Seq( // otel4s library (Scala wrapper for OpenTelemetry) "org.typelevel" %% "otel4s-oteljava" % "0.11.2", // OpenTelemetry exporter (for data transmission) "io.opentelemetry" % "opentelemetry-exporter-otlp" % "1.48.0" % Runtime, // Auto-configuration functionality (configuration via environment variables and system properties) "io.opentelemetry" % "opentelemetry-sdk-extension-autoconfigure" % "1.48.0" % Runtime, ) ``` ## OpenTelemetry Configuration in ldbc ldbc provides a `setTracer` method in ConnectionProvider where you can configure the OpenTelemetry Tracer. Basic configuration steps are as follows: 1. Initialize OpenTelemetry 2. Obtain a TracerProvider 3. Configure it in the ConnectionProvider Here's a basic configuration example: ```scala import cats.effect.* import io.opentelemetry.api.GlobalOpenTelemetry import org.typelevel.otel4s.oteljava.OtelJava import ldbc.connector.* import ldbc.dsl.* // Service name (trace identifier) val serviceName = "my-ldbc-app" // Resource configuration val resource: Resource[IO, Connection[IO]] = for // Create otel4s instance from GlobalOpenTelemetry otel <- Resource .eval(IO.delay(GlobalOpenTelemetry.get)) .evalMap(OtelJava.forAsync[IO]) // Get TracerProvider with service name tracer <- Resource.eval(otel.tracerProvider.get(serviceName)) // Configure database connection and set TracerProvider connection <- ConnectionProvider .default[IO]("localhost", 3306, "user", "password", "database") .setSSL(SSL.Trusted) .setTracer(tracer) // Set OpenTelemetry Tracer .createConnection() yield connection // Execute query using the resource val program = resource.use { conn => sql"SELECT * FROM users".query[String].to[List].readOnly(conn) } ``` ## Configuration via Environment Variables OpenTelemetry can be configured using system properties or environment variables. You can customize OpenTelemetry behavior by setting the following system properties when starting your application: ```shell java -Dotel.java.global-autoconfigure.enabled=true \ -Dotel.service.name=ldbc-app \ -Dotel.traces.exporter=otlp \ -Dotel.metrics.exporter=none \ -Dotel.exporter.otlp.endpoint=http://localhost:4317 \ -jar your-application.jar ``` For SBT execution, you can configure it as follows: ```scala javaOptions ++= Seq( "-Dotel.java.global-autoconfigure.enabled=true", "-Dotel.service.name=ldbc-app", "-Dotel.traces.exporter=otlp", "-Dotel.metrics.exporter=none" ) ``` ## Practical Example: ldbc Application with OpenTelemetry Here's a complete example of setting up an observability environment using Docker Compose with Jaeger and Prometheus, and sending trace data from an ldbc application. 
### Project Structure

Assume the following project structure:

```
otel-example/
├── src/
│   └── main/
│       └── scala/
│           └── Main.scala
├── database/
│   └── xxx.sql
├── dependencies/
│   ├── jaeger/
│   │   └── jaeger-ui.json
│   ├── prometheus/
│   │   └── prometheus.yml
│   └── grafana/
│       └── datasources.yaml
└── docker-compose.yaml
```

### Main.scala

Here's the main application code:

```scala
import cats.effect.*

import io.opentelemetry.api.GlobalOpenTelemetry

import org.typelevel.otel4s.oteljava.OtelJava

import ldbc.connector.*
import ldbc.dsl.*

object Main extends IOApp.Simple:

  private val serviceName = "ldbc-otel-example"

  private def resource: Resource[IO, Connection[IO]] =
    for
      otel <- Resource
                .eval(IO.delay(GlobalOpenTelemetry.get))
                .evalMap(OtelJava.forAsync[IO])
      tracer <- Resource.eval(otel.tracerProvider.get(serviceName))
      connection <- ConnectionProvider
                      .default[IO]("127.0.0.1", 13307, "ldbc", "password", "world")
                      .setSSL(SSL.Trusted)
                      .setTracer(tracer)
                      .createConnection()
    yield connection

  override def run: IO[Unit] =
    resource.use { conn =>
      sql"SELECT name FROM city".query[String].to[List].readOnly(conn).flatMap { cities =>
        IO.println(cities)
      }
    }
```

### docker-compose.yaml

Use the following Docker Compose configuration for the observability environment:

```yaml
services:
  database:
    image: mysql:8.0.41
    container_name: ldbc-otel-example
    ports:
      - 13307:3306
    networks:
      - static-network
    volumes:
      - ./database:/docker-entrypoint-initdb.d
      - ./database/my.cnf:/etc/mysql/conf.d/my.cnf
    environment:
      MYSQL_USER: 'ldbc'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'root'
      TZ: 'Asia/Tokyo'
    healthcheck:
      test: [ "CMD", "mysqladmin", "ping", "-h", "localhost" ]
      timeout: 20s
      retries: 10
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - "./dependencies/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml"
    ports:
      - "9090:9090"
    networks:
      - static-network
  jaeger:
    image: jaegertracing/all-in-one:latest
    volumes:
      - "./dependencies/jaeger/jaeger-ui.json:/etc/jaeger/jaeger-ui.json"
    command: --query.ui-config /etc/jaeger/jaeger-ui.json
    environment:
      - METRICS_STORAGE_TYPE=prometheus
      - PROMETHEUS_SERVER_URL=http://prometheus:9090
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - "16685:16685" # GRPC
      - "16686:16686" # UI
      - "4317:4317"   # OTLP gRPC receiver
      - "4318:4318"   # OTLP http receiver
    networks:
      - static-network
  grafana:
    image: grafana/grafana-oss
    restart: unless-stopped
    volumes:
      - "./dependencies/grafana/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml"
    ports:
      - "3000:3000"
    networks:
      - static-network
    depends_on:
      jaeger:
        condition: service_started
networks:
  static-network:
    name: static-network
```

### Execution Steps

1. First, start the observability environment with Docker Compose:

```bash
cd otel-example
docker-compose up -d
```

2. Then, run the application with the following Java options:

```bash
sbt -Dotel.java.global-autoconfigure.enabled=true \
    -Dotel.service.name=ldbc-otel-example \
    -Dotel.metrics.exporter=none \
    run
```

3.
After running the application, access the Jaeger UI ([http://localhost:16686](http://localhost:16686)) to view traces:

![Jaeger UI](../../../img/jaeger_ui.png)

You can also check traces in the Grafana UI ([http://localhost:3000](http://localhost:3000)):

![Grafana UI](../../../img/grafana_ui.png)

## Trace Details

ldbc's OpenTelemetry integration records the following information in traces:

- **Query execution time**: Time taken for database queries
- **SQL statements**: Content of executed SQL queries
- **Query parameters**: Parameter values passed to queries (safely recorded)
- **Connection information**: Server and database name connection details
- **Error information**: Detailed information when errors occur

## Adding Custom Spans

For more detailed tracking, you can add custom spans. Here's an example of wrapping query execution with a custom span (it assumes the otel4s `tracer` obtained during setup is in scope and that `org.typelevel.otel4s.Attribute` is imported):

```scala
resource.use { conn =>
  tracer.span("custom-database-operation").use { span =>
    for
      // Add attributes to span
      _      <- span.addAttribute(Attribute("db.operation", "select-cities"))
      // Execute query
      cities <- sql"SELECT name FROM city".query[String].to[List].readOnly(conn)
      // Add attributes about result data
      _      <- span.addAttribute(Attribute("result.count", cities.size.toLong))
      _      <- IO.println(cities)
    yield ()
  }
}
```

## Trace Customization Options

ldbc's OpenTelemetry integration allows for additional customization such as:

- **Sampling rate configuration**: Trace only a portion of queries instead of all
- **Attribute filtering**: Exclude sensitive information from traces
- **Custom trace exporters**: Send to tracing systems other than Jaeger

These settings can be configured through OpenTelemetry configuration files or system properties.

## Summary

Combining ldbc with OpenTelemetry enables detailed visualization and monitoring of database operations. Using the methods introduced in this guide makes it easier to identify application performance issues, discover bottlenecks, and perform troubleshooting.

In production environments, you may need to adjust settings according to security requirements and environment specifics. We recommend referring to the official OpenTelemetry documentation to find the optimal configuration.

# Examples

This section shows several examples using ldbc.

## Table of Contents

@:navigationTree {
  entries = [
    { target = "/en/examples", depth = 2 }
  ]
}

# ldbc (Lepus Database Connectivity)

Please note that **ldbc** is pre-1.0 software and is still under active development. Newer versions may no longer be binary compatible with earlier versions.

ldbc is a library for building pure functional JDBC layers with [Cats Effect 3](https://typelevel.org/cats-effect/) and [Scala 3](https://github.com/scala/scala3).

ldbc is a [Typelevel](http://typelevel.org/) project. It embraces pure, typeful, functional programming, and is meant to provide a safe and friendly environment for teaching, learning, and contributing as described in Scala's [Code of Conduct](http://scala-lang.org/conduct.html).

## Introduction

Most of our application development involves the use of databases. One way to access databases in Scala is to use JDBC, and several Scala libraries wrap it.

- Functional DSLs (Slick, quill, zio-sql)
- SQL string interpolators (Anorm, doobie)

ldbc is also a library that wraps JDBC, combining aspects of both approaches. It's a Scala 3 library that provides a type-safe and refactorable SQL interface, capable of expressing SQL expressions on MySQL databases.

Unlike other libraries, ldbc also offers its own connector built in Scala.
Scala currently supports multiple platforms: JVM, JS, and Native. However, libraries using JDBC can only run in JVM environments. Therefore, ldbc is being developed to provide a connector written in Scala that supports the MySQL protocol, allowing it to work across different platforms. By using ldbc, you can access databases across platforms while leveraging Scala's type safety and the benefits of functional programming. ### Target Audience This documentation is intended for developers who use ldbc, a library for database access using the Scala programming language. ldbc is designed for those interested in typed pure functional programming. If you're not a Cats user or are unfamiliar with functional I/O and monadic Cats Effect, you may need to proceed at a slower pace. That said, if you find yourself confused or frustrated by this documentation or the ldbc API, please open an issue and ask for help. Both the library and documentation are young and rapidly evolving, so some lack of clarity is inevitable. This document will therefore be continuously updated to address issues and omissions. ## Quick Start The current version is **0.3.0-beta11**, compatible with **Scala 3.3.5**. ```scala libraryDependencies ++= Seq( // Start with this one "io.github.takapi327" %% "ldbc-dsl" % "0.3.0-beta11", // Choose the connector you want to use "io.github.takapi327" %% "jdbc-connector" % "0.3.0-beta11", // Java connector (supported platform: JVM) "io.github.takapi327" %% "ldbc-connector" % "0.3.0-beta11", // Scala connector (supported platforms: JVM, JS, Native) // And add these as needed "io.github.takapi327" %% "ldbc-query-builder" % "0.3.0-beta11", // Type-safe query building "io.github.takapi327" %% "ldbc-schema" % "0.3.0-beta11", // Database schema construction ) ``` ## TODO - JSON data type support - SET data type support - Geometry data type support - CHECK constraint support - Support for databases other than MySQL - Streaming support - ZIO module support - Test kit - etc... # Migration Notes (from 0.2.x to 0.3.x) ## Package **Change package name** | 0.2.x | 0.3.x | |-----------|-------------| | ldbc-core | ldbc-schema | **New Packages** Three new packages have been added. | Module / Platform | JVM | Scala Native | Scala.js | |-------------------|:---:|:------------:|:--------:| | `ldbc-connector` | ✅ | ✅ | ✅ | | `jdbc-connector` | ✅ | ❌ | ❌ | | `ldbc-statement` | ✅ | ✅ | ✅ | **All Packages** | Module / Platform | JVM | Scala Native | Scala.js | |----------------------|:---:|:------------:|:--------:| | `ldbc-sql` | ✅ | ✅ | ✅ | | `ldbc-connector` | ✅ | ✅ | ✅ | | `jdbc-connector` | ✅ | ❌ | ❌ | | `ldbc-dsl` | ✅ | ✅ | ✅ | | `ldbc-statement` | ✅ | ✅ | ✅ | | `ldbc-query-builder` | ✅ | ✅ | ✅ | | `ldbc-schema` | ✅ | ✅ | ✅ | | `ldbc-schemaSpy` | ✅ | ❌ | ❌ | | `ldbc-codegen` | ✅ | ✅ | ✅ | | `ldbc-hikari` | ✅ | ❌ | ❌ | | `ldbc-plugin` | ✅ | ❌ | ❌ | ## Functional change ### Connector switching function Support for jdbc and ldbc connection switching has been added to the Scala MySQL connector. This change gives developers the flexibility to choose database connections using either the jdbc or ldbc libraries depending on their project requirements. This allows developers to take advantage of the features of the different libraries, increasing flexibility in connection setup and operation. #### How to change First, common dependencies are set up. 
```scala 3
libraryDependencies += "io.github.takapi327" %% "ldbc-dsl" % "0.3.0-beta11"
```

In cross-platform projects (JVM, JS, native)

```scala 3
libraryDependencies += "io.github.takapi327" %%% "ldbc-dsl" % "0.3.0-beta11"
```

The dependency package used depends on whether the database connection is made through a connector using the Java API or through a connector provided by ldbc.

**Use of JDBC connectors**

```scala 3
libraryDependencies += "io.github.takapi327" %% "jdbc-connector" % "0.3.0-beta11"
```

**Using the ldbc connector**

```scala 3
libraryDependencies += "io.github.takapi327" %% "ldbc-connector" % "0.3.0-beta11"
```

In cross-platform projects (JVM, JS, native)

```scala 3
libraryDependencies += "io.github.takapi327" %%% "ldbc-connector" % "0.3.0-beta11"
```

#### Usage rules

**Use of JDBC connectors**

```scala 3
import jdbc.connector.*

val ds = new com.mysql.cj.jdbc.MysqlDataSource()
ds.setServerName("127.0.0.1")
ds.setPortNumber(13306)
ds.setDatabaseName("world")
ds.setUser("ldbc")
ds.setPassword("password")

val provider = ConnectionProvider.fromDataSource[IO](ds, ExecutionContexts.synchronous)
```

**Using the ldbc connector**

```scala 3
import ldbc.connector.*

val provider = ConnectionProvider
  .default[IO]("127.0.0.1", 3306, "ldbc", "password", "ldbc")
  .setSSL(SSL.Trusted)
```

Database operations can then be performed using the connection established by either method.

```scala 3
val result: IO[(List[Int], Option[Int], Int)] = provider.use { conn =>
  (for
    result1 <- sql"SELECT 1".query[Int].to[List]
    result2 <- sql"SELECT 2".query[Int].to[Option]
    result3 <- sql"SELECT 3".query[Int].unsafe
  yield (result1, result2, result3)).readOnly(conn)
}
```

## Breaking changes

### Extension of plain query construction

Previously, a plain query specified both the type to be retrieved and its format (list or option) in a single call. This change separates the specification of the type from the specification of its format, so the internal logic can be shared. It also makes the plain query syntax more similar to doobie, so doobie users should be able to use it without confusion.

**before**

```scala 3
sql"SELECT id, name, age FROM user".toList[(Long, String, Int)].readOnly(connection)
sql"SELECT id, name, age FROM user WHERE id = ${1L}".headOption[User].readOnly(connection)
```

**after**

```scala 3
sql"SELECT id, name, age FROM user".query[(Long, String, Int)].to[List].readOnly(connection)
sql"SELECT id, name, age FROM user WHERE id = ${1L}".query[User].to[Option].readOnly(connection)
```

### AUTO INCREMENT value acquisition method renaming

The update API `updateReturningAutoGeneratedKey`, which returns the value generated by an AUTO_INCREMENT column, has been renamed to `returning`. Returning the AUTO_INCREMENT value on insert is MySQL-specific behavior; other RDBMSs behave differently and may return values other than the one generated by AUTO_INCREMENT. The API was renamed early so that the narrower original name does not constrain future extensions.

**before**

```scala 3
sql"INSERT INTO `table`(`id`, `c1`) VALUES ($None, ${ "column 1" })".updateReturningAutoGeneratedKey[Long]
```

**after**

```scala 3
sql"INSERT INTO `table`(`id`, `c1`) VALUES ($None, ${ "column 1" })".returning[Long]
```

### How to build a query builder

Previously, the query builder could not be used without building the table schema.
In this update, changes have been made to allow the Query Builder to be used in a simpler manner.

**before**

First, create a table schema corresponding to the model:

```scala 3
case class User(
  id:   Long,
  name: String,
  age:  Option[Int],
)

val userTable = Table[User]("user")(                 // CREATE TABLE `user` (
  column("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY), //   `id` BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  column("name", VARCHAR(255)),                      //   `name` VARCHAR(255) NOT NULL,
  column("age", INT.UNSIGNED.DEFAULT(None)),         //   `age` INT unsigned DEFAULT NULL
)
```

Next, use the table schema to build the `TableQuery`.

```scala 3
val tableQuery = TableQuery[IO, User](userTable)
```

The last step was query construction.

```scala 3
val result: IO[List[User]] = connection.use { conn =>
  tableQuery.selectAll.toList[User].readOnly(conn)
  // "SELECT `id`, `name`, `age` FROM user"
}
```

**after**

With this change, you start by defining the model:

```scala 3
import ldbc.dsl.codec.Codec
import ldbc.query.builder.Table

case class User(
  id:   Long,
  name: String,
  age:  Option[Int],
) derives Table
object User:
  given Codec[User] = Codec.derived[User]
```

Next, build the `TableQuery` from the model.

```scala 3
import ldbc.query.builder.*

val userTable = TableQuery[User]
```

Finally, construct the query.

```scala
val result: IO[List[User]] = provider.use { conn =>
  userTable.selectAll.query.to[List].readOnly(conn)
  // "SELECT `id`, `name`, `age` FROM user"
}
```

#### Building a query builder using Schema

The previous method of building a Table that mimics a schema has been replaced by building a Table type using the Schema project. In the following, we will look at the construction of a Table type corresponding to the User model.

```scala 3
case class User(
  id:   Long,
  name: String,
  age:  Option[Int],
)
```

**Before**

Previously, you had to create an instance of Table directly, passing the columns as arguments in the same order as the properties of the User class, and the data type of every column had to be specified as well. TableQuery built on this table type was implemented using Dynamic, which allows type-safe access but offers no code completion in development tools. In addition, this construction method compiled somewhat more slowly than class-based definitions.

```scala 3
val userTable = Table[User]("user")(
  column("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY),
  column("name", VARCHAR(255)),
  column("age", INT.UNSIGNED.DEFAULT(None)),
)
```

**After**

With this change, a Table type is now created by defining a class that extends Table. In addition, specifying the data type of a column is no longer mandatory and is left to the implementer. Because this construction style is similar to Slick, it should feel familiar to implementers.

```scala 3
class UserTable extends Table[User]("user"):
  def id:   Column[Long]        = column[Long]("id")
  def name: Column[String]      = column[String]("name")
  def age:  Column[Option[Int]] = column[Option[Int]]("age")

  override def * : Column[User] = (id *: name *: age).to[User]
```

The data type of a column can still be set. This setting is used, for example, when generating a schema using this table class.
```scala 3
class UserTable extends Table[User]("user"):
  def id:   Column[Long]        = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY)
  def name: Column[String]      = column[String]("name", VARCHAR(255))
  def age:  Column[Option[Int]] = column[Option[Int]]("age", INT.UNSIGNED.DEFAULT(None))

  override def * : Column[User] = (id *: name *: age).to[User]
```

There are also column definition methods named after their data types. Using them, the definition above can be rewritten as follows. With these methods the method name is used as the column name, so there is no need to pass the column name as an argument.

```diff
class UserTable extends Table[User]("user"):
-  def id: Column[Long] = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY)
-  def name: Column[String] = column[String]("name", VARCHAR(255))
-  def age: Column[Option[Int]] = column[Option[Int]]("age", INT.UNSIGNED.DEFAULT(None))
+  def id: Column[Long] = bigint().autoIncrement.primaryKey
+  def name: Column[String] = varchar(255)
+  def age: Column[Option[Int]] = int().unsigned.defaultNull

  override def * : Column[User] = (id *: name *: age).to[User]
```

Column names can be formatted by providing a `Naming` instance implicitly. The default is camel case; to change it to Pascal case, do the following:

```scala 3
class UserTable extends Table[User]("user"):
  given Naming = Naming.PASCAL

  def id:   Column[Long]        = bigint().autoIncrement.primaryKey
  def name: Column[String]      = varchar(255)
  def age:  Column[Option[Int]] = int().unsigned.defaultNull

  override def * : Column[User] = (id *: name *: age).to[User]
```

If you want to change the format of a specific column, you can define it by passing the column name as an argument.

```scala 3
class UserTable extends Table[User]("user"):
  def id:   Column[Long]        = bigint("ID").autoIncrement.primaryKey
  def name: Column[String]      = varchar("NAME", 255)
  def age:  Column[Option[Int]] = int("AGE").unsigned.defaultNull

  override def * : Column[User] = (id *: name *: age).to[User]
```

### Support for custom data types

Previously, user-defined data types were supported via `ResultSetReader` and `Parameter`. This update changes how custom data types are supported.

#### Encoder

Dynamic embedding of values into the query string is now handled by `Encoder` instead of `Parameter`. This removes the boilerplate of threading the effect type through user code, so custom data types can be implemented and used as parameters more simply.

```scala 3
enum Status(val code: Int, val name: String):
  case Active   extends Status(1, "Active")
  case InActive extends Status(2, "InActive")
```

```diff
-given Parameter[Status] with
-  override def bind[F[_]](
-    statement: PreparedStatement[F],
-    index: Int,
-    status: Status
-  ): F[Unit] = statement.setInt(index, status.code)
+given Encoder[Status] = Encoder[Int].contramap(_.code)
```

The `Encoder` encoding process can only return Scala types that can be handled by `PreparedStatement`.
Currently, the following types are supported:

| Scala Type                | Methods called in PreparedStatement |
|---------------------------|-------------------------------------|
| `Boolean`                 | `setBoolean`                        |
| `Byte`                    | `setByte`                           |
| `Short`                   | `setShort`                          |
| `Int`                     | `setInt`                            |
| `Long`                    | `setLong`                           |
| `Float`                   | `setFloat`                          |
| `Double`                  | `setDouble`                         |
| `BigDecimal`              | `setBigDecimal`                     |
| `String`                  | `setString`                         |
| `Array[Byte]`             | `setBytes`                          |
| `java.time.LocalDate`     | `setDate`                           |
| `java.time.LocalTime`     | `setTime`                           |
| `java.time.LocalDateTime` | `setTimestamp`                      |
| `None`                    | `setNull`                           |

Encoder can also compose multiple types to create a new type.

```scala 3
val encoder: Encoder[(Int, String)] = Encoder[Int] *: Encoder[String]
```

Composite types can also be converted to arbitrary classes.

```scala 3
case class Status(code: Int, name: String)

given Encoder[Status] = (Encoder[Int] *: Encoder[String]).to[Status]
```

#### Decoder

The process of reading data from a `ResultSet` has changed from `ResultSetReader` to `Decoder`.

```diff
-given ResultSetReader[IO, Status] =
-  ResultSetReader.mapping[IO, Int, Status](code => Status.fromCode(code))
+given Decoder[Status] = Decoder[Int].map(code => Status.fromCode(code))
```

Decoder can also compose multiple types to create a new type.

```scala 3
val decoder: Decoder[(Int, String)] = Decoder[Int] *: Decoder[String]
```

Composite types can also be converted to arbitrary classes.

```scala 3
case class Status(code: Int, name: String)

given Decoder[Status] = (Decoder[Int] *: Decoder[String]).to[Status]
```

### Introducing Codec

`Codec` is a combination of `Encoder` and `Decoder`.

```scala 3
enum Status(val code: Int, val name: String):
  case Active   extends Status(1, "Active")
  case InActive extends Status(2, "InActive")

given Codec[Status] = Codec[Int].imap(Status.fromCode)(_.code)
```

Codec can also compose multiple types to create a new type.

```scala 3
val codec: Codec[(Int, String)] = Codec[Int] *: Codec[String]
```

Composite types can also be converted to arbitrary classes.

```scala 3
case class Status(code: Int, name: String)

given Codec[Status] = (Codec[Int] *: Codec[String]).to[Status]
```

Since `Codec` is a combination of `Encoder` and `Decoder`, it can be converted into either one.

```scala 3
val encoder: Encoder[Status] = Codec[Status].asEncoder
val decoder: Decoder[Status] = Codec[Status].asDecoder
```

This change lets users combine `Encoder` and `Decoder` through `Codec`, which makes it possible to convert retrieved records into nested hierarchical data.

```scala
case class City(id: Int, name: String, countryCode: String)
case class Country(code: String, name: String)
case class CityWithCountry(city: City, country: Country)

sql"SELECT city.Id, city.Name, city.CountryCode, country.Code, country.Name FROM city JOIN country ON city.CountryCode = country.Code".query[CityWithCountry]
```

Because `Encoder` and `Decoder` are resolved implicitly starting from `Codec`, users do not need to specify these types explicitly. However, if there are many properties in the model, implicit lookups may fail.

```shell
[error] |Implicit search problem too large.
[error] |an implicit search was terminated with failure after trying 100000 expressions.
[error] |The root candidate for the search was:
[error] |
[error] |  given instance given_Decoder_P in object Decoder for ldbc.dsl.codec.Decoder[City]}
```

In such cases, raising the search limit in the compile options may solve the problem.
```scala
scalacOptions += "-Ximplicit-search-limit:100000"
```

However, raising the limit may increase compile time. In that case, you can instead construct the required instances manually, as follows.

```scala 3
given Decoder[City] = Decoder.derived[City]
// Or
given Decoder[City] = (Decoder[Int] *: Decoder[String] *: Decoder[Int] *: ....).to[City]

given Encoder[City] = Encoder.derived[City]
// Or
given Encoder[City] = (Encoder[Int] *: Encoder[String] *: Encoder[Int] *: ....).to[City]
```

Alternatively, you can use `Codec` to solve this problem by combining `Encoder` and `Decoder`.

```scala 3
given Codec[City] = Codec.derived[City]
// Or
given Codec[City] = (Codec[Int] *: Codec[String] *: Codec[Int] *: ....).to[City]
```

### Change column refinement method

Until now, selecting columns simply grouped them into a tuple.

```scala 3
cityTable.select(city => (city.id, city.name))
```

However, this had problems. `Column` is a type with a single type parameter, and because Scala 2 limited the arity of tuples, boilerplate was needed to handle every tuple arity. The dynamically built tuple could only be handled as a Tuple or Tuple.Map, so accessing a Column type required a cast with asInstanceOf, which lost type safety and made the code more complex.

To solve this problem, we adopted another Typelevel project, [twiddles](https://github.com/typelevel/twiddles). With twiddles, columns can be composed more easily.

```scala 3
cityTable.select(city => city.id *: city.name)
```

It also eliminates the need for unsafe typecasting, since the internal code can just use `Column[T]` instead of Tuple.

Twiddles also make it easier to convert the composite result to another type.

```scala 3
case class City(id: Long, name: String)

def id: Column[Long] = column[Long]("ID")
def name: Column[String] = column[String]("Name")

def city: Column[City] = (id *: name).to[City]
```

### Change from Table to TableQuery

Previously, the same Table type was used both to hold table information and to construct queries from the model.

```scala 3
case class City(id: Long, name: String) derives Table

val cityTable = Table[City]
```

However, this implementation was easy to get wrong because the same type was used to represent two things.

```scala 3
cityTable.select(city => city.insert(???))
```

Development tools such as IDEs offer completion for every available API, which could easily confuse implementers.

To solve this problem, we have separated the Table and TableQuery types.

```scala 3
case class City(id: Long, name: String) derives Table

val cityTable = TableQuery[City]
```

**The table name can be customized via Table's derived**

```scala 3
case class City(
  id:          Int,
  name:        String,
  countryCode: String,
  district:    String,
  population:  Int
)

object City:
  given Table[City] = Table.derived[City]("city")
```

### Change in the way the update statement is constructed

Previously, an update statement required a separate `set` call for each column to be updated. This is convenient when you want to update a few columns individually, but writing a `set` for every additional column quickly becomes tedious.
```scala 3
cityTable
  .update("id", 1L)
  .set("name", "Tokyo")
  .set("population", 1, false)
```

With this update, columns can now be combined, allowing multiple columns to be specified together for update processing.

```scala 3
cityTable
  .update(city => city.id *: city.name)((1L, "Tokyo"))
  .set(_.population, 1, false)
```

`set` can still be used to update only specific columns. It is also possible to pass a condition to `set`, so that a query can be created that updates the additional column only when the condition is true.

### Change the way table joins are constructed

Previously, table joins were constructed by setting the join condition as the second argument.

```scala 3
cityTable.join(countryTable)((c, co) => c.countryCode === co.code)
```

With this change, table join conditions must be set with the `on` API. This change is the result of an internal implementation change.

```scala 3
cityTable.join(countryTable).on((c, co) => c.countryCode === co.code)
```

# Q: How do I add processing before and after the connection is established?

## A: To add processing before and after connection establishment, use `withBeforeAfter`.

You can use the `withBeforeAfter` method when creating a Connection to invoke processes both before and after a connection is established. For example, as shown below, by passing arbitrary processes to `before` and `after`, you can output logs before and after connection establishment. The second argument of `withBeforeAfter` specifies the type of the Before process result that will be passed to After.

```scala 3
import ldbc.connector.*

def before: Connection[IO] => IO[Unit] = _ => IO.println("Connecting to...")

def after: (Unit, Connection[IO]) => IO[Unit] = (_, _) => IO.println("Connection Closed")

val provider = ConnectionProvider
  ...
  .withBeforeAfter(before, after)
```

@:callout(warning)

This feature is only available when using `ldbc-connector`.

```scala 3
libraryDependencies += "io.github.takapi327" %% "ldbc-connector" % "0.3.0-beta11"
```

@:@

## References

- [Using ldbc connector](/en/tutorial/Connection.md#using-the-ldbc-connector)

# Q: How do I use my own type?

## A: To use your own custom type, first define an Encoder and Decoder for that type.

This allows you to bind values to the database and extract values from query results as custom types. For example, you can define a Status type representing user status as follows:

```scala 3
// Definition of custom Status type
enum Status(val active: Boolean, val label: String):
  case Active   extends Status(true, "Active")
  case Inactive extends Status(false, "Inactive")

// Define an Encoder to bind Status as a Boolean
given Encoder[Status] = Encoder[Boolean].contramap(_.active)

// Define a Decoder to convert from Boolean to Status
given Decoder[Status] = Decoder[Boolean].map {
  case true  => Status.Active
  case false => Status.Inactive
}

// Example of using Codec to integrate Encoder and Decoder
given Codec[Status] =
  Codec[Boolean].imap(b => if b then Status.Active else Status.Inactive)(_.active)
```

In the sample above, the Status type is actually converted to a Boolean value and used in database INSERTs and query result decoding. This maintains type safety in database interactions and allows for easy integration of custom logic.
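For illustration, here is a minimal sketch of the Status type used in actual statements. It assumes the given instances above and the usual `ldbc.dsl.*` import are in scope, and a hypothetical `user` table with a BOOLEAN `status` column (the table and column names are illustrative, not part of the original example):

```scala 3
// Sketch only: `user` and its `status` column are illustrative names.
def updateStatus(id: Long, status: Status): DBIO[Int] =
  sql"UPDATE user SET status = $status WHERE id = $id".update // Status is bound through its Encoder as a Boolean

val activeUsers: DBIO[List[(Long, Status)]] =
  sql"SELECT id, status FROM user WHERE status = ${Status.Active}"
    .query[(Long, Status)] // Status is read back through its Decoder
    .to[List]
```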
Additionally, when creating a new type by combining multiple types, you can compose Encoders and Decoders as follows: ```scala 3 // Example of combining two values and converting from tuple to custom type case class CustomStatus(code: Int, label: String) given Encoder[CustomStatus] = (Encoder[Int] *: Encoder[String]).to[CustomStatus] given Decoder[CustomStatus] = (Decoder[Int] *: Decoder[String]).to[CustomStatus] // or given Codec[CustomStatus] = (Codec[Int] *: Codec[String]).to[CustomStatus] ``` By defining Encoders, Decoders, and Codecs for your custom types, you can naturally handle custom types in data operations through ldbc. ## References - [Custom Data Types](/en/tutorial/Custom-Data-Type.md) - [How to use Codec](/en/tutorial/Custom-Data-Type.md#codec) # Q: How do I use nested models? ## A: In ldbc, you can map multiple columns to nested models. For example, here's how to give a User model a nested Name model, which maps to separate columns (e.g., first_name, last_name) in the database: ```scala 3 // Definition of nested model case class User(id: Long, name: User.Name, email: String) object User: case class Name(firstName: String, lastName: String) // Table definition example class UserTable extends Table[User]("user"): def id: Column[Long] = bigint().autoIncrement.primaryKey def firstName: Column[String] = varchar(255) def lastName: Column[String] = varchar(255) def email: Column[String] = varchar(255) // Convert (firstName *: lastName) to User.Name, and map id, name, and email to User override def * : Column[User] = (id *: (firstName *: lastName).to[User.Name] *: email).to[User] // Usage example: TableQuery[UserTable].selectAll.query.to[List].foreach(println) // Records retrieved from UserTable's select are automatically converted to User.Name(firstName, lastName) ``` With this definition, the database columns `first_name` and `last_name` correspond to `firstName` and `lastName` in User.Name, allowing you to use them as a nested model. ## References - [Schema Definition Details](/en/tutorial/Schema.md) - [Custom Data Types](/en/tutorial/Custom-Data-Type.md) # Q: How to change column names in Query Builder? ## A: In Query Builder, there are mainly two methods available for changing column names in model definitions. ### A: 1. Using annotations You can specify column names used in queries by adding the `@Column` annotation to model fields. For example, if you want to treat the `name` field of the User model as `full_name`, define it as follows: ```scala 3 case class User( id: Int, @Column("full_name") name: String, email: String ) derives Table val query = TableQuery[User].select(user => user.id *: user.name *: user.email) // When generating the query, the name field is treated as "full_name" println(query.statement) // Output example: "SELECT `id`, `full_name`, `email` FROM user" ``` ### A: 2. Using the alias feature of Query Builder Another method is provided to specify an alias for columns during query construction without modifying the model definition. 
The example below shows how to change the column name at query time by giving the column an alias with `as`:

```scala 3
import ldbc.dsl.codec.Codec
import ldbc.query.builder.*

case class User(id: Int, name: String, email: String) derives Table
object User:
  given Codec[User] = Codec.derived[User]

val userTable = TableQuery[User]

// Build a query and specify aliases in the select clause
val queryWithAlias = userTable
  .select(user => user.id *: user.name.as("full_name") *: user.email)

println(queryWithAlias.statement)
// Output example: "SELECT `id`, `name` AS `full_name`, `email` FROM user"
```

As shown above, you can change the format and display of column names in Query Builder by using annotations at model definition time or by specifying aliases during query construction.

## References

- [How to use Query Builder](/en/tutorial/Query-Builder.md)
- [Schema Definition Details](/en/tutorial/Schema.md)

# Q: How to change the format of column names using the schema?

## A: In schema definitions, the format of column names can be changed by modifying the implicit instance of Naming.

For example, the default is camel case, but using `Naming.PASCAL` converts all column names to Pascal case. The sample code below demonstrates setting `given Naming = Naming.PASCAL` within a table definition, which is automatically applied to column names.

```scala 3
// Schema definition example (changing column name format)
case class User(id: Long, name: String, age: Option[Int])

class UserTable extends Table[User]("user"):
  // Set Naming to change column names to Pascal case
  given Naming = Naming.PASCAL

  def id:   Column[Long]        = bigint()          // Automatically becomes "Id"
  def name: Column[String]      = varchar(255)      // Automatically becomes "Name"
  def age:  Column[Option[Int]] = int().defaultNull // Automatically becomes "Age"

  override def * : Column[User] = (id *: name *: age).to[User]

// Usage example: The changed column names are applied in select statements
val userTable: TableQuery[UserTable] = TableQuery[UserTable]
val select = userTable.selectAll

println(select.statement)
// Output example: "SELECT `Id`, `Name`, `Age` FROM user"
```

This method allows you to change column names in bulk, enabling consistent schema definitions aligned with your project's naming conventions.

## References

- [Schema Definition Details](/en/tutorial/Schema.md)
- [Custom Data Types](/en/tutorial/Custom-Data-Type.md)

# Q: How to define complex queries with plain queries?

### A: You can define IN clauses by using the `in` function.

For example, you can define an IN clause that embeds values from a list into an SQL statement as follows:

```scala
// Sample: Generating an IN clause
val ids = NonEmptyList.of(1, 2, 3)
val inClause = in(sql"user.id", ids)
// The generated SQL will be "(user.id IN (?, ?, ?))"
```

### A: Use the `and` function to combine multiple SQL conditions with AND.

In the example below, multiple conditions are concatenated with AND to build a single WHERE clause.

```scala
// Sample: Generating AND conditions
val cond1: SQL = sql"user.age > ?"    // Example: age filter
val cond2: SQL = sql"user.status = ?" // Example: status filter
val andClause = and(NonEmptyList.of(cond1, cond2))
// The generated SQL will be "((user.age > ?) AND (user.status = ?))"
```

### A: Use the `or` function to combine multiple conditions with OR.

In the example below, multiple conditions are concatenated with OR to generate a flexible WHERE clause.

```scala
// Sample: Generating OR conditions
val condA: SQL = sql"user.country = ?"
val condB: SQL = sql"user.region = ?"
val orClause = or(NonEmptyList.of(condA, condB))
// The generated SQL will be "((user.country = ?) OR (user.region = ?))"
```

### A: The `whereAnd` and `whereOr` functions are useful when dynamically building WHERE clauses.

Using these, you can automatically generate WHERE clauses only when conditions exist.

```scala
// Sample: Generating dynamic WHERE clauses
val conditions: NonEmptyList[SQL] = NonEmptyList.of(sql"user.age > ?", sql"user.status = ?")
val whereClause = whereAnd(conditions)
// The generated SQL will be "WHERE (user.age > ?) AND (user.status = ?)"
```

### A: The `comma` and `parentheses` functions are helpful in complex queries for concatenating multiple columns or conditions.

These functions allow you to properly separate and group list-format SQL elements.

```scala
// Sample: Concatenating columns and grouping
val colList = comma(NonEmptyList.of(sql"user.id", sql"user.name", sql"user.email"))
val grouped = parentheses(colList)
// The generated SQL will be "(user.id, user.name, user.email)"
```

## References

- [Parameterized Queries](/en/tutorial/Parameterized-Queries.md)

# Q: How to handle multiple databases (multi-tenant environment)?

## A: When dealing with multiple databases, create separate `ConnectionProvider` instances for each database.

For example, you can create different providers for different databases as shown below, and switch between providers as needed.

```scala 3
val provider1 = ConnectionProvider
  .default[IO]("host", 3306, "user", "password", "database1")

val provider2 = ConnectionProvider
  .default[IO]("host", 3306, "user", "password", "database2")

// Switch between providers as needed
val program1 = provider1.use { conn => /* operations on database1 */ }
val program2 = provider2.use { conn => /* operations on database2 */ }
```

# Q: How to perform DDL with schema?

## A: To generate and execute DDL from a schema, use the `schema` method of TableQuery.

The `schema` method is a convenient mechanism that automatically generates DDL statements such as CREATE, DROP, TRUNCATE, etc. from table definitions. The following sample code demonstrates how to generate DDL for `UserTable` and execute table creation, data deletion, and table deletion in sequence.

```scala 3
// Example: DDL operations for UserTable
// Assuming UserTable is defined
val userSchema = TableQuery[UserTable].schema

// Examples of generating DDL statements
val createDDL   = userSchema.createIfNotExists.statements
val dropDDL     = userSchema.dropIfExists.statements
val truncateDDL = userSchema.truncate.statements

// If you want to check the generated DDL statements
createDDL.foreach(println)   // CREATE TABLE statement will be output
dropDDL.foreach(println)     // DROP TABLE statement will be output
truncateDDL.foreach(println) // TRUNCATE TABLE statement will be output

// Example of executing DDL operations
provider
  .use { conn =>
    DBIO.sequence(
      userSchema.createIfNotExists,
      userSchema.truncate,
      userSchema.dropIfExists
    )
    .commit(conn)
  }
```

The code above implements operations that create tables while checking for their existence, reset table data as needed, and then delete the tables themselves.

## References

- [Schema Definition Details](/en/tutorial/Schema.md)
- [Database Operations](/en/tutorial/Database-Operations.md)

# Q: How to use connection pool with Scala connector?
## A: Currently, the Scala connector does not support connection pooling.

If you want to use a connection pool, please use the Java connector together with a connection pool library such as [HikariCP](https://github.com/brettwooldridge/HikariCP).

## References

- [HikariCP](/en/examples/HikariCP.md)
- [Using JDBC connector](/en/tutorial/Connection.md#using-the-jdbc-connector)

# Q: How to handle the `Implicit search problem too large.` error?

```shell
[error] |Implicit search problem too large.
[error] |an implicit search was terminated with failure after trying 100000 expressions.
[error] |The root candidate for the search was:
[error] |
[error] |  given instance given_Decoder_P in object Decoder for ldbc.dsl.codec.Decoder[City]}
```

## A: When `Implicit search problem too large.` occurs, you can solve it by increasing the recursion depth using the `-Ximplicit-search-limit` option.

When the `Implicit search problem too large.` error occurs, you can often solve the problem by increasing the search limit compilation option. You can set any value for the number indicating the recursion depth.

```scala 3
scalacOptions += "-Ximplicit-search-limit:100000"
```

However, expanding the limit through this option may lead to increased compilation time. In that case, you can also solve it by manually constructing the required type as shown below:

```scala 3
given Decoder[City] = Decoder.derived[City]
// Or
given Decoder[City] = (Decoder[Int] *: Decoder[String] *: Decoder[Int] *: ....).to[City]

given Encoder[City] = Encoder.derived[City]
// Or
given Encoder[City] = (Encoder[Int] *: Encoder[String] *: Encoder[Int] *: ....).to[City]
```

Alternatively, you can solve it by using `Codec` to combine `Encoder` and `Decoder`.

```scala 3
given Codec[City] = Codec.derived[City]
// Or
given Codec[City] = (Codec[Int] *: Codec[String] *: Codec[Int] *: ....).to[City]
```

## References

- [Custom Data Type](/en/tutorial/Custom-Data-Type.md)

# Q: Is there a function to limit the number of concurrent queries?

## A: Currently there is no built-in concurrency limit feature, but you can implement it using Cats Effect's Semaphore.

You can implement a function to limit concurrent executions using Cats Effect's `Semaphore`. For example, by defining a `limitedConcurrency` function as follows and sharing a single `Semaphore` across queries, you can cap how many queries run at the same time.

```scala 3
import cats.effect.*
import cats.effect.std.Semaphore
import ldbc.dsl.*

// Create the Semaphore once and share it across queries;
// a Semaphore created inside each call would not limit anything.
def limitedConcurrency[A](sem: Semaphore[IO], program: DBIO[A], conn: Connection[IO]): IO[A] =
  sem.permit.use(_ => program.readOnly(conn))

// Usage sketch: at most 4 queries run concurrently
// Semaphore[IO](4).flatMap { sem => limitedConcurrency(sem, query, conn) }
```

# Q: Is there a way to stream query results asynchronously?

## A: Currently, a streaming API is not supported.

Streaming is planned for a future version. For now, please use the method that retrieves query results all at once.

# Q: What is ldbc?

## A: ldbc is an OSS library that enables type-safe database access, query building, and schema definition.

ldbc leverages Scala's power to prevent errors at compile time while allowing intuitive database operations. For example, let's start with how to build a schema.
Here's an example of schema (table) definition:

```scala 3
// Schema definition example
case class User(id: Long, name: String, email: String)

class UserTable extends Table[User]("user"):
  def id:    Column[Long]   = bigint().autoIncrement.primaryKey
  def name:  Column[String] = varchar(255)
  def email: Column[String] = varchar(255)

  override def * : Column[User] = (id *: name *: email).to[User]
```

In ldbc, after defining such a schema, you can abstract tables using TableQuery.

```scala
// UserTable abstraction using TableQuery
val userTable: TableQuery[UserTable] = TableQuery[UserTable]

// This allows you to use the schema with QueryBuilder API
```

Next, here's an example of how to use the query builder with the schema definition above. We'll show how to insert and retrieve data using TableQuery based on the schema.

### Query Builder Example Using Schema

```scala 3
// Using UserTable and TableQuery defined in the schema
val userTable: TableQuery[UserTable] = TableQuery[UserTable]

// Data insertion using schema
val schemaInsert: DBIO[Int] =
  (userTable += User(1, "Charlie", "charlie@example.com")).update

// Data retrieval using schema (mapping to User)
val schemaSelect = userTable.selectAll.query.to[List]

// Execution example
for
  _     <- schemaInsert.commit(conn)
  users <- schemaSelect.readOnly(conn)
yield users.foreach(println)
```

You can also perform data operations using plain queries directly. For example, here's how to insert and retrieve data using plain SQL:

```scala 3
// Data insertion using plain query
val plainInsert: DBIO[Int] =
  sql"INSERT INTO user (id, name, email) VALUES (2, 'Dave', 'dave@example.com')".update

// Data retrieval using plain query (mapping to User)
val plainSelect: DBIO[List[User]] =
  sql"SELECT id, name, email FROM user".query[User].to[List]

// Execution example
for
  _     <- plainInsert.commit(conn)
  users <- plainSelect.readOnly(conn)
yield users.foreach(println)
```

As shown above, ldbc provides a simple yet powerful API that enables intuitive data operations, from schema construction to building queries on top of those schemas.

## References

- [How to Use Query Builder](/en/tutorial/Query-Builder.md)
- [Schema Definition Details](/en/tutorial/Schema.md)

# Q: What is the difference between a Java connector and a Scala connector?

## A: While both the Java connector (jdbc-connector) and the Scala connector (ldbc-connector) provide database connectivity, they differ in the following aspects:

### A: Java Connector (jdbc-connector)

The Java connector uses the traditional JDBC API to connect to databases.

- Depends on JDBC drivers (e.g., `mysql-connector-j` for MySQL) and requires low-level configuration.
- Connection establishment, query execution, and result retrieval are implemented using traditional procedural APIs.

```scala
import com.mysql.cj.jdbc.MysqlDataSource

import cats.effect.IO

import ldbc.dsl.DBIO
import jdbc.connector.*

// Example of datasource configuration and JDBC connector usage
val ds = new com.mysql.cj.jdbc.MysqlDataSource()
ds.setServerName("127.0.0.1")
ds.setPortNumber(13306)
ds.setDatabaseName("world")
ds.setUser("ldbc")
ds.setPassword("password")

val provider = ConnectionProvider.fromDataSource[IO](ds, ExecutionContexts.synchronous)

// Example of using Java JDBC API-based connection
provider.use { conn =>
  // Execute SQL using PreparedStatement etc.
  DBIO.pure(()).commit(conn)
}
```

### A: Scala Connector (ldbc-connector)

The Scala connector manages database connections leveraging type safety and functional programming.
- Uses Cats Effect's `Resource` and `IO` for safe connection acquisition and release. - Enables intuitive data manipulation when combined with DSL and query builders. - Additionally, ldbc-connector works not only on the JVM but also on non-JVM platforms like Scala.js and Scala Native. This makes database connectivity easily achievable in cross-platform development environments. ```scala import cats.effect.IO import ldbc.connector.* import ldbc.dsl.DBIO // Example of creating a connection using ldbc-connector (JVM, Scala.js, Scala Native compatible) val provider = ConnectionProvider .default[IO]("127.0.0.1", 3306, "ldbc", "password", "ldbc") .setSSL(SSL.Trusted) // Example of using the Scala connector: connection closure is ensured automatically using Resource internally provider.use { conn => // Can execute SQL using ldbc DSL and DBIO DBIO.pure(()).commit(conn) } ``` ### Key Differences - **API Design Philosophy**: While the Java connector uses the traditional procedural JDBC API as is, the Scala connector is based on functional programming, achieving type-safe and declarative connection management. - **Error Handling and Resource Management**: The Scala connector uses Cats Effect's `Resource` and `IO` for safe connection acquisition and release, enabling concise error handling. - **Integration**: The Scala connector seamlessly integrates with ldbc DSL and query builders, providing a unified type-safe API from query definition to execution. ## References - [Connection](/en/tutorial/Connection.md) - [What is ldbc?](/en/qa/What-is-ldbc.md) # Q: Which dependencies should I set? ## A: To use ldbc, you need to set the following dependencies according to your needs. - Plain DSL - Query Builder - Schema Definition and Model Mapping **Connector** To perform database connection processing using ldbc, you need to set one of the following dependencies. **jdbc-connector** If you want to use the traditional connector written in Java, set the following dependencies: ```scala 3 libraryDependencies ++= Seq( "io.github.takapi327" %% "jdbc-connector" % "0.3.0-beta11", "com.mysql" % "mysql-connector-j" % "8.4.0" ) ``` **ldbc-connector** If you want to use the new connector written in Scala, set the following dependencies: ```scala 3 libraryDependencies ++= Seq( "io.github.takapi327" %% "ldbc-connector" % "0.3.0-beta11" ) ``` ldbc-connector works not only on the JVM but also on JS and Native platforms. To use ldbc with Scala.js or Scala Native, set the dependencies as follows: ```scala 3 libraryDependencies ++= Seq( "io.github.takapi327" %%% "ldbc-connector" % "0.3.0-beta11" ) ``` ### Plain DSL To use the plain DSL, set the following dependencies: ```scala 3 libraryDependencies ++= Seq( "io.github.takapi327" %% "ldbc-dsl" % "0.3.0-beta11" ) ``` Plain DSL is a method to write simple SQL statements as they are. For example, you can execute queries using SQL literals directly. ```scala import ldbc.dsl.* val plainResult = sql"SELECT name FROM user" .query[String] .to[List] .readOnly(conn) // plainResult is returned as List[String] ``` ### Query Builder To use the query builder, set the following dependencies: ```scala 3 libraryDependencies ++= Seq( "io.github.takapi327" %% "ldbc-query-builder" % "0.3.0-beta11" ) ``` The query builder is a method to construct queries with a type-safe API. In the following example, we define a `User` model and build a SELECT statement using `TableQuery`.
```scala 3 import ldbc.dsl.codec.Codec import ldbc.query.builder.* case class User(id: Int, name: String, email: String) derives Table object User: given Codec[User] = Codec.derived[User] val userQuery = TableQuery[User] .select(user => user.id *: user.name *: user.email) .where(_.email === "alice@example.com") // userQuery.statement is generated as "SELECT id, name, email FROM user WHERE email = ?" ``` ### Schema Definition and Model Mapping To use schema definition and model mapping, set the following dependencies: ```scala 3 libraryDependencies ++= Seq( "io.github.takapi327" %% "ldbc-schema" % "0.3.0-beta11" ) ``` Schema definition and model mapping enable one-to-one mapping between table definitions and Scala models. Here's an example of defining a `User` table: ```scala 3 import ldbc.schema.* case class User(id: Long, name: String, email: String) class UserTable extends Table[User]("user"): def id: Column[Long] = bigint().autoIncrement.primaryKey def name: Column[String] = varchar(255) def email: Column[String] = varchar(255) override def * : Column[User] = (id *: name *: email).to[User] val userQuery = TableQuery[UserTable] .select(user => user.id *: user.name *: user.email) .where(_.email === "alice@example.com") // userQuery.statement is generated as "SELECT id, name, email FROM user WHERE email = ?" ``` ## References - [How to use Query Builder](/en/tutorial/Query-Builder.md) - [Schema Definition Details](/en/tutorial/Schema.md) - [Plain DSL Usage Examples](/en/tutorial/Selecting-Data.md) - [Database Connection](/en/tutorial/Connection.md) - [Parameterized Queries](/en/tutorial/Parameterized-Queries.md) # QA This section contains questions and answers about ldbc. ## Table of Contents @:navigationTree { entries = [ { target = "/en/qa", depth = 2 } ] } # Connector This chapter describes database connections using ldbc's own MySQL connector. To connect to MySQL databases in Scala, you need to use JDBC. JDBC is a standard Java API that can also be used in Scala. Since JDBC is implemented in Java, when using it in Scala, it can only operate in the JVM environment. In recent years, the Scala ecosystem has seen active development of plugins to enable operation in environments like JS and Native. Scala continues to evolve from a language that only operates in JVM environments using Java assets to one that can operate in multi-platform environments. However, JDBC is a standard Java API and does not support operation in Scala's multi-platform environments. Therefore, even if you create a Scala application to run in JS, Native, etc., you cannot connect to databases like MySQL because JDBC cannot be used. The Typelevel Project includes a Scala library for [PostgreSQL](https://www.postgresql.org/) called [Skunk](https://github.com/typelevel/skunk). This project does not use JDBC and achieves PostgreSQL connections using pure Scala. As a result, you can connect to PostgreSQL regardless of whether you're in JVM, JS, or Native environments when using Skunk. The ldbc connector is a project being developed to enable MySQL connections regardless of whether you're in JVM, JS, or Native environments, inspired by Skunk. The ldbc connector is the lowest layer API. We plan to provide higher-layer APIs using this connector in the future. We also plan to maintain compatibility with existing higher-layer APIs. 
To use it, you need to set up the following dependency in your project: **JVM** ```scala 3 libraryDependencies += "io.github.takapi327" %% "ldbc-connector" % "0.3.0-beta11" ``` **JS/Native** ```scala 3 libraryDependencies += "io.github.takapi327" %%% "ldbc-connector" % "0.3.0-beta11" ``` **Supported Versions** The current version supports the following versions of MySQL: - MySQL 5.7.x - MySQL 8.x The main support is for MySQL 8.x. MySQL 5.7.x is sub-supported. Therefore, caution is required when operating with MySQL 5.7.x. In the future, support for MySQL 5.7.x is planned to be discontinued. ## Connection To connect to MySQL using the ldbc connector, use `ConnectionProvider`. ```scala 3 import cats.effect.IO import ldbc.connector.ConnectionProvider val provider = ConnectionProvider .default[IO]( host = "127.0.0.1", port = 3306, user = "root", ) ``` Below is a list of properties that can be set when constructing a `Connection`. | Property | Details | Required | |---------------------------|---------------------------------------------------------------|----| | `host` | `Database host information` | ✅ | | `port` | `Database port information` | ✅ | | `user` | `Database user information` | ✅ | | `password` | `Database password information (default: None)` | ❌ | | `database` | `Database name information (default: None)` | ❌ | | `debug` | `Whether to display debug information (default: false)` | ❌ | | `ssl` | `SSL settings (default: SSL.None)` | ❌ | | `socketOptions` | `Specify socket options for TCP/UDP sockets (default: defaultSocketOptions)` | ❌ | | `readTimeout` | `Specify timeout duration (default: Duration.Inf)` | ❌ | | `allowPublicKeyRetrieval` | `Whether to retrieve public key (default: false)` | ❌ | | `logHandler` | `Log output settings` | ❌ | | `before` | `Processing to be executed after connection is established` | ❌ | | `after` | `Processing to be executed before disconnecting` | ❌ | | `tracer` | `Tracer settings for metrics output (default: Tracer.noop)` | ❌ | `ConnectionProvider` uses `Resource` for resource management. Therefore, when using connection information, use the `use` method to manage resources. ```scala 3 provider.use { conn => // Write code } ``` ### Authentication In MySQL, authentication occurs when the client connects to the MySQL server and sends user information during the LoginRequest phase. The server then searches for the sent user in the `mysql.user` table and determines which authentication plugin to use. After determining the authentication plugin, the server calls the plugin to start user authentication and sends the result to the client. Thus, authentication in MySQL is pluggable (various types of plugins can be attached and detached). The authentication plugins supported by MySQL are listed on the [official page](https://dev.mysql.com/doc/refman/8.0/en/authentication-plugins.html). ldbc currently supports the following authentication plugins: - Native Pluggable Authentication - SHA-256 Pluggable Authentication - SHA-2 Pluggable Authentication Cache *Note: Native Pluggable Authentication and SHA-256 Pluggable Authentication are deprecated plugins from MySQL 8.x. It is recommended to use the SHA-2 Pluggable Authentication Cache unless there is a specific reason.* You do not need to be aware of authentication plugins in the ldbc application code. Users should create users with the desired authentication plugin on the MySQL database and use those users to attempt connections to MySQL in the ldbc application code. 
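For example, connecting as a user created with MySQL 8's default `caching_sha2_password` plugin looks the same as any other connection; the sketch below is illustrative (host, port, and credentials are assumptions). The one setting you may need to adjust is `allowPublicKeyRetrieval`: when connecting without SSL to such a user, the client has to fetch the server's RSA public key to exchange the password securely.

```scala 3
import cats.effect.IO
import ldbc.connector.*

// A user created with the caching_sha2_password plugin needs no plugin-specific code;
// allowPublicKeyRetrieval is only relevant when SSL is disabled.
val provider = ConnectionProvider
  .default[IO]("127.0.0.1", 3306, "ldbc", "password", "ldbc")
  .setSSL(SSL.None)
  .setAllowPublicKeyRetrieval(true)

val checkAlive = provider.use { conn =>
  conn.isValid // simple liveness check; queries are executed the same way as usual
}
```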
ldbc internally determines the authentication plugin and connects to MySQL using the appropriate authentication plugin. ## Execution The following table is assumed to be used in subsequent processes. ```sql CREATE TABLE users ( id BIGINT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) NOT NULL, age INT NULL ); ``` ### Statement `Statement` is an API for executing SQL without dynamic parameters. *Note: Since `Statement` does not use dynamic parameters, there is a risk of SQL injection depending on how it is used. Therefore, it is recommended to use `PreparedStatement` when using dynamic parameters.* Use the `createStatement` method of `Connection` to construct a `Statement`. #### Read Query To execute read-only SQL, use the `executeQuery` method. The values returned from the MySQL server as a result of executing the query are stored in `ResultSet` and returned as the return value. ```scala 3 provider.use { conn => for statement <- conn.createStatement() result <- statement.executeQuery("SELECT * FROM users") yield // Processing using ResultSet } ``` #### Write Query To execute SQL for writing, use the `executeUpdate` method. The values returned from the MySQL server as a result of executing the query are the number of affected rows returned as the return value. ```scala 3 provider.use { conn => for statement <- conn.createStatement() result <- statement.executeUpdate("INSERT INTO users (name, age) VALUES ('Alice', 20)") yield result } ``` #### Retrieve AUTO_INCREMENT Value To retrieve the AUTO_INCREMENT value after executing a query using `Statement`, use the `getGeneratedKeys` method. The values returned from the MySQL server as a result of executing the query are the values generated by AUTO_INCREMENT returned as the return value. ```scala 3 provider.use { conn => for statement <- conn.createStatement() _ <- statement.executeUpdate("INSERT INTO users (name, age) VALUES ('Alice', 20)", Statement.RETURN_GENERATED_KEYS) generatedKeys <- statement.getGeneratedKeys() yield generatedKeys } ``` ### Client/Server PreparedStatement ldbc provides `PreparedStatement` divided into `Client PreparedStatement` and `Server PreparedStatement`. `Client PreparedStatement` is an API for constructing SQL with dynamic parameters on the application and sending it to the MySQL server. Therefore, the query sending method to the MySQL server is the same as `Statement`. This API corresponds to JDBC's `PreparedStatement`. For a safer way to construct queries within the MySQL server, use `Server PreparedStatement`. `Server PreparedStatement` is an API for preparing queries to be executed within the MySQL server in advance and setting parameters on the application for execution. With `Server PreparedStatement`, the query sending and parameter sending are separated, allowing query reuse. When using `Server PreparedStatement`, prepare the query in advance on the MySQL server. The MySQL server uses memory to store it, but query reuse is possible, leading to performance improvements. However, the prepared query continues to use memory until it is released, posing a risk of memory leaks. When using `Server PreparedStatement`, use the `close` method to properly release the query. #### Client PreparedStatement Use the `clientPreparedStatement` method of `Connection` to construct a `Client PreparedStatement`. ```scala 3 provider.use { conn => for statement <- conn.clientPreparedStatement("SELECT * FROM users WHERE id = ?") ... yield ...
} ``` #### Server PreparedStatement Use the `serverPreparedStatement` method of `Connection` to construct a `Server PreparedStatement`. ```scala 3 provider.use { conn => for statement <- conn.serverPreparedStatement("SELECT * FROM users WHERE id = ?") ... yield ... } ``` #### Read Query To execute read-only SQL, use the `executeQuery` method. The values returned from the MySQL server as a result of executing the query are stored in `ResultSet` and returned as the return value. ```scala 3 provider.use { conn => for statement <- conn.clientPreparedStatement("SELECT * FROM users WHERE id = ?") // or conn.serverPreparedStatement("SELECT * FROM users WHERE id = ?") _ <- statement.setLong(1, 1) result <- statement.executeQuery() yield // Processing using ResultSet } ``` When using dynamic parameters, use the `setXXX` method to set the parameters. The `setXXX` method can also use the `Option` type. If `None` is passed, NULL is set as the parameter. The `setXXX` method specifies the parameter index and the parameter value. ```scala 3 statement.setLong(1, 1) ``` The following methods are currently supported. | Method | Type | Remarks | |-----------------|---------------------------------------|-----------------------------------| | `setNull` | | Sets NULL as the parameter | | `setBoolean` | `Boolean/Option[Boolean]` | | | `setByte` | `Byte/Option[Byte]` | | | `setShort` | `Short/Option[Short]` | | | `setInt` | `Int/Option[Int]` | | | `setLong` | `Long/Option[Long]` | | | `setBigInt` | `BigInt/Option[BigInt]` | | | `setFloat` | `Float/Option[Float]` | | | `setDouble` | `Double/Option[Double]` | | | `setBigDecimal` | `BigDecimal/Option[BigDecimal]` | | | `setString` | `String/Option[String]` | | | `setBytes` | `Array[Byte]/Option[Array[Byte]]` | | | `setDate` | `LocalDate/Option[LocalDate]` | Directly handles `java.time` instead of `java.sql`. | | `setTime` | `LocalTime/Option[LocalTime]` | Directly handles `java.time` instead of `java.sql`. | | `setTimestamp` | `LocalDateTime/Option[LocalDateTime]` | Directly handles `java.time` instead of `java.sql`. | | `setYear` | `Year/Option[Year]` | Directly handles `java.time` instead of `java.sql`. | #### Write Query To execute SQL for writing, use the `executeUpdate` method. The values returned from the MySQL server as a result of executing the query are the number of affected rows returned as the return value. ```scala 3 provider.use { conn => for statement <- conn.clientPreparedStatement("INSERT INTO users (name, age) VALUES (?, ?)") // or conn.serverPreparedStatement("INSERT INTO users (name, age) VALUES (?, ?)") _ <- statement.setString(1, "Alice") _ <- statement.setInt(2, 20) result <- statement.executeUpdate() yield result } ``` #### Retrieve AUTO_INCREMENT Value To retrieve the AUTO_INCREMENT value after executing a query, use the `getGeneratedKeys` method. The values returned from the MySQL server as a result of executing the query are the values generated by AUTO_INCREMENT returned as the return value. 
```scala 3 provider.use { conn => for statement <- conn.clientPreparedStatement("INSERT INTO users (name, age) VALUES (?, ?)", Statement.RETURN_GENERATED_KEYS) // or conn.serverPreparedStatement("INSERT INTO users (name, age) VALUES (?, ?)", Statement.RETURN_GENERATED_KEYS) _ <- statement.setString(1, "Alice") _ <- statement.setInt(2, 20) _ <- statement.executeUpdate() getGeneratedKeys <- statement.getGeneratedKeys() yield getGeneratedKeys } ``` ### ResultSet `ResultSet` is an API for storing the values returned from the MySQL server after executing a query. To retrieve records obtained by executing SQL from `ResultSet`, you can use the `next` method and `getXXX` methods similar to JDBC, or use the ldbc-specific `decode` method. #### next/getXXX The `next` method returns `true` if the next record exists, and `false` if the next record does not exist. The `getXXX` method is an API for retrieving values from records. The `getXXX` method can specify either the column index or the column name to retrieve. ```scala 3 provider.use { conn => for statement <- conn.clientPreparedStatement("SELECT `id`, `name`, `age` FROM users WHERE id = ?") _ <- statement.setLong(1, 1) resultSet <- statement.executeQuery() yield val builder = List.newBuilder[(Long, String, Int)] while resultSet.next() do val id = resultSet.getLong(1) val name = resultSet.getString("name") val age = resultSet.getInt(3) builder += ((id, name, age)) builder.result() } ``` ## Transaction To execute transactions using `Connection`, combine the `setAutoCommit` method, `commit` method, and `rollback` method. First, use the `setAutoCommit` method to disable automatic transaction commit. ```scala 3 conn.setAutoCommit(false) ``` After performing some processing, use the `commit` method to commit the transaction. ```scala 3 for statement <- conn.clientPreparedStatement("INSERT INTO users (name, age) VALUES (?, ?)") _ <- statement.setString(1, "Alice") _ <- statement.setInt(2, 20) _ <- conn.setAutoCommit(false) result <- statement.executeUpdate() _ <- conn.commit() yield result ``` Alternatively, use the `rollback` method to roll back the transaction. ```scala 3 for statement <- conn.clientPreparedStatement("INSERT INTO users (name, age) VALUES (?, ?)") _ <- statement.setString(1, "Alice") _ <- statement.setInt(2, 20) _ <- conn.setAutoCommit(false) result <- statement.executeUpdate() _ <- conn.rollback() yield result ``` When the `setAutoCommit` method is used to disable automatic transaction commit, the connection is automatically rolled back when the Resource is released. ### Transaction Isolation Level ldbc allows you to set the transaction isolation level. The transaction isolation level is set using the `setTransactionIsolation` method. MySQL supports the following transaction isolation levels: - READ UNCOMMITTED - READ COMMITTED - REPEATABLE READ - SERIALIZABLE For more information on MySQL transaction isolation levels, refer to the [official documentation](https://dev.mysql.com/doc/refman/8.0/en/innodb-transaction-isolation-levels.html). ```scala 3 import ldbc.connector.Connection.TransactionIsolationLevel conn.setTransactionIsolation(TransactionIsolationLevel.REPEATABLE_READ) ``` To retrieve the currently set transaction isolation level, use the `getTransactionIsolation` method. ```scala 3 for isolationLevel <- conn.getTransactionIsolation() yield isolationLevel ``` ### Savepoints For more advanced transaction management, you can use the "Savepoint" feature.
This allows you to mark specific points during database operations, enabling you to roll back to those points if something goes wrong. This is particularly useful for complex database operations or when you need to set safe points during long transactions. **Features:** - Flexible Transaction Management: Use savepoints to create "checkpoints" at any point within a transaction, and roll back to that point later if needed. - Error Recovery: Instead of redoing everything from the beginning when an error occurs, you can roll back to the last safe savepoint, saving time and improving efficiency. - Advanced Control: By setting multiple savepoints, you can achieve more precise transaction control. Developers can easily implement more complex logic and error handling. By leveraging this feature, your application can achieve more robust and reliable database operations. **Setting Savepoints** To set a savepoint, use the `setSavepoint` method. You can specify the name of the savepoint with this method. If you do not specify a name for the savepoint, a default name generated by UUID will be set. You can retrieve the name of the set savepoint using the `getSavepointName` method. *Note: In MySQL, auto-commit is enabled by default. To use savepoints, you need to disable auto-commit. Otherwise, all operations will be committed each time, and you will not be able to roll back transactions using savepoints.* ```scala 3 for _ <- conn.setAutoCommit(false) savepoint <- conn.setSavepoint("savepoint1") yield savepoint.getSavepointName ``` **Rolling Back to Savepoints** To roll back a part of a transaction using a savepoint, pass the savepoint to the `rollback` method. After partially rolling back using a savepoint, if you commit the entire transaction, the transaction after that savepoint will not be committed. ```scala 3 for _ <- conn.setAutoCommit(false) savepoint <- conn.setSavepoint("savepoint1") _ <- conn.rollback(savepoint) _ <- conn.commit() yield () ``` **Releasing Savepoints** To release a savepoint, pass the savepoint to the `releaseSavepoint` method. After releasing a savepoint, if you commit the entire transaction, the transaction after that savepoint will be committed. ```scala 3 for _ <- conn.setAutoCommit(false) savepoint <- conn.setSavepoint("savepoint1") _ <- conn.releaseSavepoint(savepoint) _ <- conn.commit() yield () ``` ## Utility Commands MySQL has several utility commands. ([Reference](https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_command_phase_utility.html)) ldbc provides APIs to use these commands. | Command | Purpose | Supported | |------------------------|--------------------------------------|------| | `COM_QUIT` | `Informs the server that the client is requesting to close the connection.` | ✅ | | `COM_INIT_DB` | `Changes the default schema of the connection.` | ✅ | | `COM_STATISTICS` | `Retrieves internal status string in a readable format.` | ✅ | | `COM_DEBUG` | `Dumps debug information to the server's standard output.` | ❌ | | `COM_PING` | `Checks if the server is alive.` | ✅ | | `COM_CHANGE_USER` | `Changes the user of the current connection.` | ✅ | | `COM_RESET_CONNECTION` | `Resets the session state.` | ✅ | | `COM_SET_OPTION` | `Sets options for the current connection.` | ✅ | ### COM QUIT `COM_QUIT` is a command to inform the server that the client is requesting to close the connection. In ldbc, you can close the connection using the `close` method of `Connection`.
When you use the `close` method, the connection is closed, and you cannot use the connection in subsequent processing. *Note: `Connection` uses `Resource` for resource management. Therefore, you do not need to use the `close` method to release resources.* ```scala 3 provider.use { conn => conn.close() } ``` ### COM INIT DB `COM_INIT_DB` is a command to change the default schema of the connection. In ldbc, you can change the default schema using the `setSchema` method of `Connection`. ```scala 3 provider.use { conn => conn.setSchema("test") } ``` ### COM STATISTICS `COM_STATISTICS` is a command to retrieve internal status string in a readable format. In ldbc, you can retrieve the internal status string using the `getStatistics` method of `Connection`. ```scala 3 provider.use { conn => conn.getStatistics } ``` The status that can be retrieved is as follows: - `uptime` : Time since the server started - `threads` : Number of currently connected clients - `questions` : Number of queries since the server started - `slowQueries` : Number of slow queries - `opens` : Number of table opens since the server started - `flushTables` : Number of table flushes since the server started - `openTables` : Number of currently open tables - `queriesPerSecondAvg` : Average number of queries per second ### COM PING `COM_PING` is a command to check if the server is alive. In ldbc, you can check if the server is alive using the `isValid` method of `Connection`. If the server is alive, it returns `true`, and if it is not alive, it returns `false`. ```scala 3 provider.use { conn => conn.isValid } ``` ### COM CHANGE USER `COM_CHANGE_USER` is a command to change the user of the current connection. It also resets the following connection states: - User variables - Temporary tables - Prepared statements - etc... In ldbc, you can change the user using the `changeUser` method of `Connection`. ```scala 3 provider.use { conn => conn.changeUser("root", "password") } ``` ### COM RESET CONNECTION `COM_RESET_CONNECTION` is a command to reset the session state. `COM_RESET_CONNECTION` is a lighter version of `COM_CHANGE_USER`, with almost the same functionality to clean up the session state, but with the following features: - Does not re-authenticate (thus avoiding extra client/server exchanges). - Does not close the connection. In ldbc, you can reset the session state using the `resetServerState` method of `Connection`. ```scala 3 provider.use { conn => conn.resetServerState } ``` ### COM SET OPTION `COM_SET_OPTION` is a command to set options for the current connection. In ldbc, you can set options using the `enableMultiQueries` method and `disableMultiQueries` method of `Connection`. Using the `enableMultiQueries` method allows you to execute multiple queries at once. Using the `disableMultiQueries` method prevents you from executing multiple queries at once. *Note: This can only be used for batch processing with Insert, Update, and Delete statements. If used with Select statements, only the result of the first query will be returned.* ```scala 3 provider.use { conn => conn.enableMultiQueries *> conn.disableMultiQueries } ``` ## Batch Commands ldbc allows you to execute multiple queries at once using batch commands. By using batch commands, you can execute multiple queries at once, reducing the number of network round trips. To use batch commands, use the `addBatch` method of `Statement` or `PreparedStatement` to add queries, and use the `executeBatch` method to execute the queries. 
```scala 3 provider.use { conn => for statement <- conn.createStatement() _ <- statement.addBatch("INSERT INTO users (name, age) VALUES ('Alice', 20)") _ <- statement.addBatch("INSERT INTO users (name, age) VALUES ('Bob', 30)") result <- statement.executeBatch() yield result } ``` In the above example, you can add the data of `Alice` and `Bob` at once. The executed query will be as follows: ```sql INSERT INTO users (name, age) VALUES ('Alice', 20);INSERT INTO users (name, age) VALUES ('Bob', 30); ``` The return value after executing the batch command is an array of the number of affected rows for each executed query. In the above example, the data of `Alice` is added as 1 row, and the data of `Bob` is also added as 1 row, so the return value will be `List(1, 1)`. After executing the batch command, the queries added with the `addBatch` method so far are cleared. To manually clear the queries, use the `clearBatch` method to clear them. ```scala 3 provider.use { conn => for statement <- conn.createStatement() _ <- statement.addBatch("INSERT INTO users (name, age) VALUES ('Alice', 20)") _ <- statement.clearBatch() _ <- statement.addBatch("INSERT INTO users (name, age) VALUES ('Bob', 30)") _ <- statement.executeBatch() yield () } ``` In the above example, the data of `Alice` is not added, but the data of `Bob` is added. ### Difference Between Statement and PreparedStatement There may be differences in the queries executed by batch commands when using `Statement` and `PreparedStatement`. When executing an INSERT statement with batch commands using `Statement`, multiple queries are executed at once. However, when executing an INSERT statement with batch commands using `PreparedStatement`, one query is executed. For example, when executing the following query with batch commands, multiple queries are executed at once because `Statement` is used. ```scala 3 provider.use { conn => for statement <- conn.createStatement() _ <- statement.addBatch("INSERT INTO users (name, age) VALUES ('Alice', 20)") _ <- statement.addBatch("INSERT INTO users (name, age) VALUES ('Bob', 30)") result <- statement.executeBatch() yield result } // Executed query // INSERT INTO users (name, age) VALUES ('Alice', 20);INSERT INTO users (name, age) VALUES ('Bob', 30); ``` However, when executing the following query with batch commands, one query is executed because `PreparedStatement` is used. ```scala 3 provider.use { conn => for statement <- conn.clientPreparedStatement("INSERT INTO users (name, age) VALUES (?, ?)") _ <- statement.setString(1, "Alice") _ <- statement.setInt(2, 20) _ <- statement.addBatch() _ <- statement.setString(1, "Bob") _ <- statement.setInt(2, 30) _ <- statement.addBatch() result <- statement.executeBatch() yield result } // Executed query // INSERT INTO users (name, age) VALUES ('Alice', 20), ('Bob', 30); ``` This is because when using `PreparedStatement`, you can set multiple parameters in one query by using the `addBatch` method after setting the query parameters. ## Executing Stored Procedures ldbc provides an API for executing stored procedures. To execute a stored procedure, use the `prepareCall` method of `Connection` to construct a `CallableStatement`.
*Note: The stored procedure used is from the [official](https://dev.mysql.com/doc/connector-j/en/connector-j-usagenotes-statements-callable.html) documentation.* ```sql CREATE PROCEDURE demoSp(IN inputParam VARCHAR(255), INOUT inOutParam INT) BEGIN DECLARE z INT; SET z = inOutParam + 1; SET inOutParam = z; SELECT inputParam; SELECT CONCAT('zyxw', inputParam); END ``` Executing the above stored procedure looks like this: ```scala 3 provider.use { conn => for callableStatement <- conn.prepareCall("CALL demoSp(?, ?)") _ <- callableStatement.setString(1, "abcdefg") _ <- callableStatement.setInt(2, 1) hasResult <- callableStatement.execute() values <- Monad[IO].whileM[List, Option[String]](callableStatement.getMoreResults()) { for resultSet <- callableStatement.getResultSet().flatMap { case Some(rs) => IO.pure(rs) case None => IO.raiseError(new Exception("No result set")) } yield resultSet.getString(1) } yield values // List(Some("abcdefg"), Some("zyxwabcdefg")) } ``` To retrieve the value of an output parameter (a parameter specified as OUT or INOUT when creating the stored procedure), in JDBC, you need to specify the parameter before executing the statement using the various `registerOutParameter()` methods of the CallableStatement interface. However, in ldbc, you only need to set the parameter using the `setXXX` method, and the parameter will be set when executing the query. However, you can also specify the parameter using the `registerOutParameter()` method in ldbc. ```scala 3 provider.use { conn => for callableStatement <- conn.prepareCall("CALL demoSp(?, ?)") _ <- callableStatement.setString(1, "abcdefg") _ <- callableStatement.setInt(2, 1) _ <- callableStatement.registerOutParameter(2, ldbc.connector.data.Types.INTEGER) hasResult <- callableStatement.execute() value <- callableStatement.getInt(2) yield value // 2 } ``` *Note: When specifying an Out parameter with `registerOutParameter`, if the same index value is not used to set the parameter with the `setXXX` method, the value will be set to `Null` on the server.* ## Unsupported Features The ldbc connector is currently an experimental feature. Therefore, the following features are not supported. Feature provision will be carried out sequentially. - Connection pooling - Failover measures - etc... # Performance Currently supported Java versions are 11, 17, and 21+. We compared the performance of ldbc and jdbc for each version. | Operation | 11 | 17 | 21+ | |-----------|:--:|:--:|:---:| | Reading | 🔺 | 🔺 | ✅ | | Writing | ✅ | ✅ | ✅ | - 🔺: Room for performance improvement - ✅: Stable performance As the performance results show, JDK 21+, the latest version, demonstrates the highest performance. Particularly, ldbc's prepareStatement and statement maintain high throughput and remain stable even under high load. When using ldbc, we recommend using JDK 21 or higher. ## Query Execution Overhead ### Writing The benchmark measured operations per second (ops/s) for different operations. The following graphs show the performance of ldbc and jdbc for each JDK version. **JDK 11** The performance of ldbc and jdbc is nearly equivalent, with ldbc showing slight advantages in specific operations. @:image(/img/connector/Insert.svg) { alt = "Insert Benchmark (JDK 11)" } **JDK 17** Compared to JDK 11, we can see that ldbc's performance has improved. In particular, response times for complex queries have been reduced. While jdbc has also improved in performance, ldbc consistently shows higher performance.
@:image(/img/connector/Insert17.svg) { alt = "Insert Benchmark (JDK 17)" } **JDK 21** The performance of ldbc has further improved, with many results surpassing jdbc. The advantages of ldbc are particularly noticeable in operations handling large amounts of data. @:image(/img/connector/Insert21.svg) { alt = "Insert Benchmark (JDK 21)" } From the benchmark results, ldbc shows improvement in performance as JDK versions increase and consistently demonstrates higher performance compared to jdbc. Especially when using JDK 21, ldbc delivers excellent performance for large-scale data operations. Considering these results, using ldbc is likely to dispel performance concerns. In particular, using the latest JDK will allow you to maximize the benefits of ldbc. ### Reading We compared the performance of ldbc and jdbc based on benchmark results using JDK 21. The following graphs show operations per second (ops/s) for different operations. - ldbc: prepareStatement - blue line - ldbc: statement - orange line - jdbc: prepareStatement - green line - jdbc: statement - red line JDK 21 shows the highest throughput among all versions. Both `ldbc: prepareStatement` and `jdbc: statement` demonstrate excellent performance, indicating that the latest optimizations are effective. @:image(/img/connector/Select21.svg) { alt = "Select Benchmark (JDK 21)" } #### Performance Comparison **1. ldbc: prepareStatement vs jdbc: prepareStatement** ldbc's prepareStatement consistently shows higher performance compared to jdbc's prepareStatement. Especially under high load, ldbc maintains stable operation counts. **2. ldbc: statement vs jdbc: statement** ldbc's statement also demonstrates superior performance compared to jdbc's statement. ldbc operates more efficiently than jdbc, particularly for complex queries and processing large volumes of data. From the benchmark results, ldbc shows superior performance compared to jdbc in the following aspects: - High throughput: ldbc has higher operations per second than jdbc and remains stable even under high load. - Efficient resource usage: ldbc operates efficiently even with complex queries and large data processing, optimizing resource usage. Based on these results, using ldbc can eliminate performance concerns. ldbc is an excellent choice, especially in high-load environments or scenarios requiring large-scale data processing. #### Other Versions Here are the benchmark results for other versions: **JDK 11** In this version, throughput tends to be lower compared to other versions. In particular, the performance of `ldbc: prepareStatement` is inferior compared to other versions. @:image(/img/connector/Select.svg) { alt = "Select Benchmark (JDK 11)" } **JDK 17** Compared to JDK 11, overall throughput has improved. The performance of `jdbc: prepareStatement` has particularly improved, showing stable results. @:image(/img/connector/Select17.svg) { alt = "Select Benchmark (JDK 17)" } # Reference This section describes the core abstractions and mechanical details of ldbc. ## Table of Contents @:navigationTree { entries = [ { target = "/en/reference", depth = 2 } ] } # Connection We've completed the [Setup](/en/tutorial/Setup.md) in the previous page. In this page, we'll learn in detail how to safely connect to databases. In ldbc, the concept of "connection" plays a central role in managing database connections. A connection provides resources to establish a connection to a database, execute queries, and safely close the connection.
This connection management is performed safely using the Resource type from cats-effect. ## Types of Connectors ldbc supports two types of connection methods. **jdbc connector** - Connection method using standard JDBC drivers - Can utilize existing JDBC drivers as they are - Familiar to developers who are used to JDBC - Easy integration with other JDBC-based tools **ldbc connector** - Dedicated connector optimized by ldbc - Performance optimized for MySQL protocol - High functional extensibility - More configuration options available Let's take a closer look at each connection method. ## Using the JDBC Connector The JDBC connector establishes connections using standard JDBC drivers. It's recommended when you want to leverage existing JDBC knowledge or prioritize compatibility with other JDBC-based tools. ### Adding Dependencies First, add the necessary dependencies. When using the jdbc connector, you need to add the MySQL connector as well. ```scala //> dep "io.github.takapi327::jdbc-connector:0.3.0-beta11" //> dep "com.mysql":"mysql-connector-j":"8.4.0" ``` ### Connection using DataSource The most common method is using a `DataSource`. This allows advanced features such as connection pooling. ```scala // Required imports import cats.effect.IO import jdbc.connector.* // Set up MySQL data source val ds = new com.mysql.cj.jdbc.MysqlDataSource() ds.setServerName("127.0.0.1") ds.setPortNumber(13306) ds.setDatabaseName("world") ds.setUser("ldbc") ds.setPassword("password") // Create a connection provider val provider = ConnectionProvider .fromDataSource[IO](ds, ExecutionContexts.synchronous) // Use the connection val program = provider.use { connection => connection.execute("SELECT 1") } ``` ### Connection using DriverManager You can also connect using the `DriverManager`. This is convenient for simple applications or script execution. ```scala // Required imports import cats.effect.IO import jdbc.connector.* // Create a provider from DriverManager val provider = ConnectionProvider .fromDriverManager[IO] .apply( "com.mysql.cj.jdbc.Driver", "jdbc:mysql://127.0.0.1:13306/world", "ldbc", "password", None // Log handler is optional ) // Use the connection val program = provider.use { connection => connection.execute("SELECT 1") } ``` ### Using Existing Connection If you already have an established `java.sql.Connection` object, you can wrap and use it: ```scala // Existing java.sql.Connection val jdbcConnection: java.sql.Connection = ??? // Convert to ldbc connection val provider = ConnectionProvider.fromConnection[IO](jdbcConnection) // Use the connection val program = provider.use { connection => connection.execute("SELECT 1") } ``` ## Using the ldbc Connector The ldbc connector is an optimized connector developed by ldbc, offering more configuration options and flexibility. ### Adding Dependencies First, add the necessary dependency. 
```scala //> dep "io.github.takapi327::ldbc-connector:0.3.0-beta11" ``` ### Connection with Basic Configuration Let's start with the simplest configuration: ```scala import cats.effect.IO import ldbc.connector.* // Create a provider with basic configuration val provider = ConnectionProvider .default[IO]("localhost", 3306, "ldbc") .setPassword("password") .setDatabase("world") // Use the connection val program = provider.use { connection => connection.execute("SELECT 1") } ``` ### Connection with SSL Configuration You can add SSL configuration to establish a secure connection: ```scala import cats.effect.IO import ldbc.connector.* val provider = ConnectionProvider .default[IO]("localhost", 3306, "ldbc", "password", "world") .setSSL(SSL.Trusted) // Enable SSL connection // Use the connection val program = provider.use { connection => connection.execute("SELECT 1") } ``` ### Connection with Advanced Configuration You can leverage many more configuration options: ```scala import scala.concurrent.duration.* import cats.effect.IO import fs2.io.net.SocketOption import ldbc.connector.* val provider = ConnectionProvider .default[IO]("localhost", 3306, "ldbc") .setPassword("password") .setDatabase("world") .setDebug(true) .setSSL(SSL.None) .addSocketOption(SocketOption.receiveBufferSize(4096)) .setReadTimeout(30.seconds) .setAllowPublicKeyRetrieval(true) .setLogHandler(customLogHandler) // Set custom log handler // Use the connection val program = provider.use { connection => connection.execute("SELECT 1") } ``` ### Adding Before/After Processing If you want to execute specific processing after establishing a connection or before disconnecting, you can use the `withBefore` and `withAfter` methods: ```scala import cats.effect.IO import ldbc.connector.* val provider = ConnectionProvider .default[IO]("localhost", 3306, "ldbc", "password", "world") .withBefore { connection => // Processing executed after connection establishment connection.execute("SET time_zone = '+09:00'") } .withAfter { (result, connection) => // Processing executed before disconnection connection.execute("RESET time_zone") } // Use the connection val program = provider.use { connection => connection.execute("SELECT NOW()") } ``` ## List of Configurable Parameters The following parameters can be configured with the ldbc connector: | Property | Details | Required | |---------------------------|-----------------------------------------------------------------------|----| | `host` | `Database host information` | ✅ | | `port` | `Database port information` | ✅ | | `user` | `Database user information` | ✅ | | `password` | `Database password information (default: None)` | ❌ | | `database` | `Database name information (default: None)` | ❌ | | `debug` | `Whether to display debug information (default: false)` | ❌ | | `ssl` | `SSL configuration (default: SSL.None)` | ❌ | | `socketOptions` | `Specify socket options for TCP/UDP sockets (default: defaultSocketOptions)` | ❌ | | `readTimeout` | `Specify timeout duration (default: Duration.Inf)` | ❌ | | `allowPublicKeyRetrieval` | `Whether to retrieve public key (default: false)` | ❌ | | `logHandler` | `Log output configuration` | ❌ | | `before` | `Processing to execute after connection establishment` | ❌ | | `after` | `Processing to execute before disconnecting` | ❌ | | `tracer` | `Tracer configuration for metrics output (default: Tracer.noop)` | ❌ | ## Resource Management and Connection Usage ldbc manages connection lifecycles using cats-effect's Resource. 
You can use connections in the following two ways: ### use Method The `use` method is convenient for simple usage: ```scala val result = provider.use { connection => // Processing using the connection connection.execute("SELECT * FROM users") } ``` ### createConnection Method For more detailed resource management, use the `createConnection` method: ```scala 3 val program = for result <- provider.createConnection().use { connection => // Processing using the connection connection.execute("SELECT * FROM users") } // Other processing... yield result ``` Using these methods, you can perform database operations while safely managing the opening/closing of connections. # Custom Data Types Now that we've learned how to configure [Logging](/en/tutorial/Logging.md), let's look at how to support your own data types in ldbc to write more expressive code. This page explains how to perform database operations using not only basic types but also domain-specific types. In real applications, you often want to use domain-specific types rather than just simple basic types. For example, you might want to define custom types like `Status` or `Currency` and map them to database basic types (strings or integers). ldbc provides mechanisms to do this easily. This chapter explains how to use custom types or unsupported types in table definitions built with ldbc. Let's add a new column to the table definition created during setup. ```sql ALTER TABLE user ADD COLUMN status BOOLEAN NOT NULL DEFAULT TRUE; ``` ## Encoder In ldbc, values passed to statements are represented by `Encoder`. `Encoder` is a trait for representing values to be bound to statements. By implementing `Encoder`, you can represent values passed to statements with custom types. ### Basic Usage Let's add a `Status` to represent the user's status in the user information. Here's an example using an `enum`: ```scala 3 enum Status(val done: Boolean, val name: String): case Active extends Status(false, "Active") case InActive extends Status(true, "InActive") ``` In the code example below, we define an `Encoder` for a custom type. The `contramap` method is used to specify how to convert from the custom type to a basic type (in this case, `Boolean`): ```scala 3 given Encoder[Status] = Encoder[Boolean].contramap(_.done) ``` This allows you to bind custom types to statements. Here's a specific example: ```scala 3 val program1: DBIO[Int] = sql"INSERT INTO user (name, email, status) VALUES (${ "user 1" }, ${ "user@example.com" }, ${ Status.Active })".update ``` ### Composite Type Encoder Encoders can also create new types by composing multiple types. You can compose types using the `*:` operator: ```scala 3 val encoder: Encoder[(Int, String)] = Encoder[Int] *: Encoder[String] ``` Composed types can also be converted to arbitrary classes. In the example below, the `to` method is used to convert from a tuple to a case class: ```scala 3 case class Status(code: Int, name: String) given Encoder[Status] = (Encoder[Int] *: Encoder[String]).to[Status] ``` In this case, the fields of the `Status` class must correspond in order. That is, `code` corresponds to the `Int` encoder, and `name` corresponds to the `String` encoder. ## Decoder In addition to parameters, ldbc also provides `Decoder` for retrieving custom types from execution results. By implementing `Decoder`, you can retrieve custom types from statement execution results. 
### Basic Usage The code example below shows how to use `Decoder` to convert a `Boolean` value to a `Status` type: ```scala 3 given Decoder[Status] = Decoder[Boolean].map { case true => Status.InActive case false => Status.Active } ``` This allows you to directly retrieve `Status` type values from query results: ```scala 3 val program2: DBIO[(String, String, Status)] = sql"SELECT name, email, status FROM user WHERE id = 1".query[(String, String, Status)].unsafe ``` ### Composite Type Decoder Decoders can also create new types by composing multiple types. Use the `*:` operator to compose types: ```scala 3 val decoder: Decoder[(Int, String)] = Decoder[Int] *: Decoder[String] ``` Composed types can also be converted to arbitrary classes: ```scala 3 case class Status(code: Int, name: String) given Decoder[Status] = (Decoder[Int] *: Decoder[String]).to[Status] ``` This definition allows two columns (integer and string) retrieved from the database to be automatically converted to instances of the `Status` class. ## Codec By using `Codec`, which combines `Encoder` and `Decoder`, you can use custom types for both values passed to statements and statement execution results. This reduces code duplication and achieves consistent type conversions. ### Basic Usage The code example below shows how to use `Codec` to integrate the `Encoder` and `Decoder` from earlier: ```scala 3 given Codec[Status] = Codec[Boolean].imap(done => if done then Status.InActive else Status.Active)(_.done) ``` ### Composite Type Codec Codecs can also create new types by composing multiple types: ```scala 3 val codec: Codec[(Int, String)] = Codec[Int] *: Codec[String] ``` Composed types can also be converted to arbitrary classes: ```scala 3 case class Status(code: Int, name: String) given Codec[Status] = (Codec[Int] *: Codec[String]).to[Status] ``` ### Retrieving Encoder and Decoder Individually Since Codec is a combination of `Encoder` and `Decoder`, you can also get the conversion process for each type individually: ```scala 3 val encoder: Encoder[Status] = Codec[Status].asEncoder val decoder: Decoder[Status] = Codec[Status].asDecoder ``` ## Converting Complex Object Structures Since `Codec`, `Encoder`, and `Decoder` can each be composed, complex object structures can be created by combining multiple types. This allows users to convert retrieved records into nested hierarchical data: ```scala 3 case class City(id: Int, name: String, countryCode: String) case class Country(code: String, name: String) case class CityWithCountry(city: City, country: Country) // Example of retrieving joined city and country information val program3: DBIO[List[CityWithCountry]] = sql""" SELECT c.id, c.name, c.country_code, co.code, co.name FROM city c JOIN country co ON c.country_code = co.code """.query[CityWithCountry].to[List] ``` In the example above, `CityWithCountry` objects are automatically constructed from the query results. ldbc resolves types at compile time and generates appropriate encoders and decoders. ## Handling Large Objects Since Codec, along with `Encoder` and `Decoder`, is implicitly resolved, users do not usually need to explicitly specify these types. However, if there are many properties in a model, the implicit resolution process might become too complex and fail: ```shell [error] |Implicit search problem too large. [error] |an implicit search was terminated with failure after trying 100000 expressions.
[error] |The root candidate for the search was: [error] | [error] | given instance given_Decoder_P in object Decoder for ldbc.dsl.codec.Decoder[City]} ``` In such cases, one of the following solutions is effective: 1. Increase the search limit in compile options: ```scala scalacOptions += "-Ximplicit-search-limit:100000" ``` However, this method may increase compilation time. 2. Manually construct explicit type conversions: ```scala 3 // Explicitly build Decoder given Decoder[City] = ( Decoder[Int] *: Decoder[String] *: Decoder[String] ).to[City] // Explicitly build Encoder given Encoder[City] = ( Encoder[Int] *: Encoder[String] *: Encoder[String] ).to[City] ``` 3. Use `Codec` to define both at once: ```scala 3 given Codec[City] = ( Codec[Int] *: Codec[String] *: Codec[String] ).to[City] ``` ## Practical Application Examples Below is an example of code using domain-specific types as a more practical example: ```scala 3 // Value representing an email address opaque type Email = String object Email: def apply(value: String): Email = value def unapply(email: Email): String = email // User ID opaque type UserId = Long object UserId: def apply(value: Long): UserId = value def unapply(userId: UserId): Long = userId // User class case class User(id: UserId, name: String, email: Email, status: Status) object User: // Codec for User ID given Codec[UserId] = Codec[Long].imap(UserId.apply)(UserId.unapply) // Codec for Email given Codec[Email] = Codec[String].imap(Email.apply)(Email.unapply) // Now you can retrieve and update users in a type-safe manner val getUser: DBIO[Option[User]] = sql"SELECT id, name, email, status FROM user WHERE id = ${UserId(1)}".query[User].to[Option] val updateEmail: DBIO[Int] = sql"UPDATE user SET email = ${Email("new@example.com")} WHERE id = ${UserId(1)}".update ``` ## Next Steps Now you know how to use custom data types with ldbc. By defining your own types, you can write more expressive and type-safe code. You can also accurately represent domain concepts in code, reducing the occurrence of bugs. Next, let's move on to [Query Builder](/en/tutorial/Query-Builder.md) to learn how to build type-safe queries without writing SQL directly. # Database Operations In [Updating Data](/en/tutorial/Updating-Data.md), you learned how to insert, update, and delete data. This page explains more advanced database operations such as transaction management and commit timing. Transaction management is important for safely performing database operations. ldbc provides abstractions for transaction management that adhere to the principles of functional programming. This section explains database operations. Before connecting to a database, you need to configure settings such as commit timing and read/write modes. ## Read-Only To start a read-only transaction, use the `readOnly` method. Using the `readOnly` method allows you to set query processing to read-only mode. You can use the `readOnly` method with `insert/update/delete` statements as well, but they will result in errors at execution time since they perform write operations. ```scala val read = sql"SELECT 1".query[Int].to[Option].readOnly(connection) ``` ## Writing To write to the database, use the `commit` method. Using the `commit` method configures your query processing to commit after each query execution. ```scala val write = sql"INSERT INTO `table`(`c1`, `c2`) VALUES ('column 1', 'column 2')".update.commit(connection) ``` ## Transactions To start a transaction, use the `transaction` method.
Using the `transaction` method allows you to group multiple database connection operations into a single transaction. With ldbc, you'll organize database connection processes in the form of `DBIO[A]`. DBIO is a monad, so you can use for-comprehension to combine smaller programs into a larger program. ```scala 3 val program: DBIO[(List[Int], Option[Int], Int)] = for result1 <- sql"SELECT 1".query[Int].to[List] result2 <- sql"SELECT 2".query[Int].to[Option] result3 <- sql"SELECT 3".query[Int].unsafe yield (result1, result2, result3) ``` Once you have combined operations into a single `DBIO` program, you can process them in a single transaction using the `transaction` method. ```scala val transaction = program.transaction(connection) ``` ## Next Steps You've now learned all the basic uses of ldbc. You've acquired the knowledge necessary for everyday database operations, including database connections, query execution, reading and writing data, and transaction management. From here, we'll move on to more advanced topics. Let's start with [Error Handling](/en/tutorial/Error-Handling.md) and learn how to properly handle exceptions that may occur in database operations. # Error Handling In [Database Operations](/en/tutorial/Database-Operations.md), we learned about basic transaction management. However, in real applications, errors can occur for various reasons, such as database connection issues, SQL syntax errors, unique constraint violations, and more. This page explains how to properly handle errors in ldbc. ## Basics of Error Handling ldbc follows functional programming principles and integrates with Cats Effect's error handling capabilities. The `DBIO` monad has a `MonadThrow` instance, providing a standard functional interface for raising and handling exceptions. There are three main error handling methods you'll primarily use: - `raiseError`: Raise an exception - converts a `Throwable` to a `DBIO[A]` - `handleErrorWith`: Handle an exception - transforms a `DBIO[A]` to another `DBIO[A]` with an error handling function - `attempt`: Capture an exception - transforms a `DBIO[A]` to a `DBIO[Either[Throwable, A]]` ## Types of Exceptions in Database Operations Here are the main types of exceptions you might encounter when using ldbc: 1. **Connection Errors**: Occur when unable to connect to the database server (e.g., `ConnectException`) 2. **SQL Exceptions**: Errors during SQL execution (e.g., `SQLException`) - Syntax errors - Unique key constraint violations - Foreign key constraint violations - Timeouts 3. 
**Type Conversion Errors**: When database results can't be converted to the expected type (e.g., `ldbc.dsl.exception.DecodeFailureException`) ## Basic Usage of Error Handling ### Raising Exceptions (raiseError) To explicitly raise an exception under specific conditions, use `raiseError`: ```scala import cats.syntax.all.* import ldbc.dsl.* // Example of raising an error under specific conditions def validateUserId(id: Int): DBIO[Int] = if id <= 0 then DBIO.raiseError[Int](new IllegalArgumentException("ID must be a positive value")) else DBIO.pure(id) // Usage example val program: DBIO[String] = for id <- validateUserId(0) result <- sql"SELECT name FROM users WHERE id = $id".query[String].unsafe yield result // In this example, an error occurs because id is 0, and subsequent SQL isn't executed ``` ### Handling Errors (handleErrorWith) To handle errors that occur, use the `handleErrorWith` method: ```scala import ldbc.dsl.* import java.sql.SQLException // Error handling example val findUserById: DBIO[String] = for userId <- DBIO.pure(123) result <- sql"SELECT name FROM users WHERE id = $userId".query[String].unsafe.handleErrorWith { case e: SQLException if e.getMessage.contains("table 'users' doesn't exist") => // Fallback when table doesn't exist DBIO.pure("User table has not been created yet") case e: SQLException => // Handling other SQL errors DBIO.pure(s"Database error: ${e.getMessage}") case e: Throwable => // Handling other errors DBIO.pure(s"Unexpected error: ${e.getMessage}") } yield result ``` ### Capturing Exceptions (attempt) To capture errors as an `Either` type, use the `attempt` method: ```scala import cats.syntax.all.* import ldbc.dsl.* // Example of capturing exceptions using attempt val safeOperation: DBIO[String] = { val riskyOperation = sql"SELECT * FROM potentially_missing_table".query[String].unsafe riskyOperation.attempt.flatMap { case Right(result) => DBIO.pure(s"Operation successful: $result") case Left(error) => DBIO.pure(s"An error occurred: ${error.getMessage}") } } ``` ## Practical Error Handling Patterns Here are some error handling patterns that are useful in real applications. 
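Before the larger patterns, note that because `DBIO` has a `MonadThrow` instance, the generic error-handling syntax from Cats (such as `recover` and `recoverWith`) also works directly on `DBIO` values. A minimal sketch, where the table name and fallback value are illustrative assumptions:

```scala
import cats.syntax.all.*
import ldbc.dsl.*

// Fall back to a default value when the query fails for any reason
val userCount: DBIO[Long] =
  sql"SELECT COUNT(*) FROM users".query[Long].unsafe
    .recover { case _: Exception => 0L }
```

The patterns below build on the same primitives (`raiseError`, `handleErrorWith`, and `attempt`) introduced above.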
### Implementing Retry Functionality An example of automatically retrying on temporary database connection errors: ```scala import scala.concurrent.duration.* import cats.effect.{IO, Temporal} import cats.effect.syntax.all.* import cats.syntax.all.* import ldbc.dsl.* import java.sql.SQLException // Retry implementation example - combining IO and DBIO def retryWithBackoff[F[_]: Temporal, A]( dbioOperation: DBIO[A], connection: Connection[F], maxRetries: Int = 3, initialDelay: FiniteDuration = 100.millis, maxDelay: FiniteDuration = 2.seconds ): F[A] = def retryLoop(remainingRetries: Int, delay: FiniteDuration): F[A] = // Convert DBIO to F type (e.g., IO) and execute dbioOperation.run(connection).handleErrorWith { error => if remainingRetries > 0 && isTransientError(error) then // For temporary errors, delay and retry val nextDelay = (delay * 2).min(maxDelay) Temporal[F].sleep(delay) >> retryLoop(remainingRetries - 1, nextDelay) else // If max retries exceeded or permanent error, rethrow Temporal[F].raiseError[A](error) } retryLoop(maxRetries, initialDelay) // Example with concrete IO type def retryDatabaseOperation[A]( operation: DBIO[A], connection: Connection[IO], maxRetries: Int = 3 ): IO[A] = retryWithBackoff(operation, connection, maxRetries) // Helper method to determine if an error is transient def isTransientError(error: Throwable): Boolean = error match case e: SQLException if e.getSQLState == "40001" => true // For deadlocks case e: SQLException if e.getSQLState == "08006" => true // For connection loss case e: Exception if e.getMessage.contains("connection reset") => true case _ => false // Usage example val query = sql"SELECT * FROM users".query[User].unsafe val result = retryDatabaseOperation(query, myConnection) ``` ### Error Handling with Custom Error Types and EitherT An example of defining application-specific error types for more detailed error handling: ```scala import cats.data.EitherT import cats.syntax.all.* import ldbc.dsl.* import java.sql.SQLException // Application-specific error types sealed trait AppDatabaseError case class UserNotFoundError(id: Int) extends AppDatabaseError case class DuplicateUserError(email: String) extends AppDatabaseError case class DatabaseConnectionError(cause: Throwable) extends AppDatabaseError case class UnexpectedDatabaseError(message: String, cause: Throwable) extends AppDatabaseError // User model case class User(id: Int, name: String, email: String) // Example of error handling with EitherT def findUserById(id: Int): EitherT[DBIO, AppDatabaseError, User] = val query = sql"SELECT id, name, email FROM users WHERE id = $id".query[User].to[Option] EitherT( query.attempt.map { case Right(user) => user.toRight(UserNotFoundError(id)) case Left(e: SQLException) if e.getMessage.contains("Connection refused") => Left(DatabaseConnectionError(e)) case Left(e) => Left(UnexpectedDatabaseError(e.getMessage, e)) } ) // Usage example val program = for user <- findUserById(123) // Other operations... 
yield user // Processing the result val result: DBIO[Either[AppDatabaseError, User]] = program.value // Final processing val finalResult: DBIO[String] = result.flatMap { case Right(user) => DBIO.pure(s"User found: ${user.name}") case Left(UserNotFoundError(id)) => DBIO.pure(s"User with ID ${id} does not exist") case Left(DatabaseConnectionError(_)) => DBIO.pure("A database connection error occurred") case Left(error) => DBIO.pure(s"Error: $error") } ``` ### Combining Transactions with Error Handling Example of error handling within a transaction: ```scala import cats.effect.IO import cats.syntax.all.* import ldbc.dsl.* // Error handling within transactions def transferMoney(fromAccount: Int, toAccount: Int, amount: BigDecimal): DBIO[String] = val operation = for // Check balance of source account balance <- sql"SELECT balance FROM accounts WHERE id = $fromAccount".query[BigDecimal].unsafe _ <- if balance < amount then DBIO.raiseError[Unit](new IllegalStateException(s"Insufficient account balance: $balance < $amount")) else DBIO.pure(()) // Withdraw from source account _ <- sql"UPDATE accounts SET balance = balance - $amount WHERE id = $fromAccount".update.unsafe // Deposit to target account _ <- sql"UPDATE accounts SET balance = balance + $amount WHERE id = $toAccount".update.unsafe // Create transaction record _ <- sql"""INSERT INTO transactions (from_account, to_account, amount, timestamp) VALUES ($fromAccount, $toAccount, $amount, NOW())""".update.unsafe yield "Money transfer completed" // Convert errors into a user-facing message (an error left unhandled here would cause the surrounding transaction to roll back) operation.handleErrorWith { error => DBIO.pure(s"Transfer error: ${error.getMessage}") } // Usage example val fromAccount: Int = ??? val toAccount: Int = ??? val amount: BigDecimal = ??? provider.use { conn => transferMoney(fromAccount, toAccount, amount).transaction(conn) } ``` ## Summary Error handling is an important aspect of robust database applications. In ldbc, you can handle errors explicitly based on functional programming principles. The main points are: - Use `raiseError` to raise exceptions - Use `handleErrorWith` to handle exceptions - Use `attempt` to capture exceptions as `Either` - Define appropriate error types for application-specific error handling - Combine transactions with error handling to maintain data integrity By utilizing these techniques, you can implement database operations that are resilient to unexpected errors and easy to maintain. ## Next Steps Now that you understand error handling, move on to [Logging](/en/tutorial/Logging.md) to learn how to log query executions and errors. Logging is important for debugging and monitoring. # Logging You learned how to handle errors in [Error Handling](/en/tutorial/Error-Handling.md). This page describes how to set up logging and log query execution and errors. Logging is an important part of understanding application behavior and diagnosing problems. ldbc provides a customizable logging system that can export execution and error logs of database connections in any format, using any logging library. By default, a logger using Cats Effect's Console is provided, which can be used during development. To customize logging using an arbitrary logging library, use `ldbc.sql.logging.LogHandler`. ldbc generates the following three types of events on database connection; the standard Console-based implementation below handles each of them. 
```scala 3 def console[F[_]: Console: Sync]: LogHandler[F] = case LogEvent.Success(sql, args) => Console[F].println( s"""Successful Statement Execution: | $sql | | arguments = [${ args.mkString(",") }] |""".stripMargin ) case LogEvent.ProcessingFailure(sql, args, failure) => Console[F].errorln( s"""Failed ResultSet Processing: | $sql | | arguments = [${ args.mkString(",") }] |""".stripMargin ) >> Console[F].printStackTrace(failure) case LogEvent.ExecFailure(sql, args, failure) => Console[F].errorln( s"""Failed Statement Execution: | $sql | | arguments = [${ args.mkString(",") }] |""".stripMargin ) >> Console[F].printStackTrace(failure) ``` The created LogHandler can be passed as an argument to `setLogHandler` when creating a Provider to use any log output method. ```scala 3 import ldbc.connector.* val provider = ConnectionProvider .default[IO]("127.0.0.1", 3306, "ldbc", "password", "ldbc") .setSSL(SSL.Trusted) .setLogHandler(console[IO]) ``` ## Next Steps Now you know how to set up logging in ldbc. Setting up proper logging will make it easier to monitor application behavior and diagnose problems. Next, go to [Custom Data Types](/en/tutorial/Custom-Data-Type.md) to learn how to use your own data types in ldbc. # Parameterized Queries In the [Simple Program](/en/tutorial/Simple-Program.md) tutorial, we learned how to execute basic queries. In real applications, you'll often execute queries based on user input or variable values. On this page, you'll learn how to safely handle parameters. In ldbc, we strongly recommend using parameterized queries to prevent SQL injection attacks. Parameterized queries allow you to separate SQL code from data, enabling safer database access. ## Parameter Basics In ldbc, there are two main ways to embed parameters in SQL statements: 1. **Dynamic parameters** - Used as regular parameters, processed by `PreparedStatement` to prevent SQL injection attacks 2. **Static parameters** - Directly embedded as part of the SQL statement (e.g., table names, column names, etc.) ## Adding Dynamic Parameters First, let's create a query without parameters. ```scala sql"SELECT name, email FROM user".query[(String, String)].to[List] ``` Next, let's incorporate the query into a method and add a parameter to select only data matching the user-specified `id`. We insert the `id` argument into the SQL statement as `$id`, just like string interpolation. ```scala val id = 1 sql"SELECT name, email FROM user WHERE id = $id".query[(String, String)].to[List] ``` When we execute the query using a connection, it works without issues. ```scala provider.use { conn => sql"SELECT name, email FROM user WHERE id = $id" .query[(String, String)] .to[List] .readOnly(conn) } ``` What's happening here? It looks like we're just dropping string literals into an SQL string, but we're actually building a `PreparedStatement`, and the `id` value is ultimately set by a call to `setInt`. This protects our application from SQL injection attacks. You can use parameters of various types: ```scala val id: Int = 1 val name: String = "Alice" val active: Boolean = true val createdAt: LocalDateTime = LocalDateTime.now() sql"INSERT INTO user (id, name, active, created_at) VALUES ($id, $name, $active, $createdAt)" ``` In ldbc, appropriate encoders are provided for each type, safely converting Scala/Java values to SQL values. ## Multiple Parameters Multiple parameters can be used in the same way. 
```scala val id = 1 val email = "alice@example.com" provider.use { conn => sql"SELECT name, email FROM user WHERE id = $id AND email > $email" .query[(String, String)] .to[List] .readOnly(conn) } ``` ## Combining Queries When building large queries, you can combine multiple SQL fragments. ```scala val baseQuery = sql"SELECT name, email FROM user" val whereClause = sql"WHERE id > $id" val orderClause = sql"ORDER BY name ASC" val query = baseQuery ++ whereClause ++ orderClause ``` ## SQL Helper Functions ldbc provides many helper functions for easily constructing complex SQL clauses. ### Handling IN Clauses A common challenge in SQL is using a series of values in an IN clause. In ldbc, this can be easily implemented using the `in` function. ```scala val ids = NonEmptyList.of(1, 2, 3) provider.use { conn => (sql"SELECT name, email FROM user WHERE " ++ in("id", ids)) .query[(String, String)] .to[List] .readOnly(conn) } ``` This is equivalent to the following SQL: ```sql SELECT name, email FROM user WHERE (id IN (?, ?, ?)) ``` Note that `ids` must be a `NonEmptyList` because an IN clause cannot be empty. ### Other Helper Functions ldbc provides many other convenient functions: #### Generating VALUES Clauses ```scala val users = NonEmptyList.of( (1, "Alice", "alice@example.com"), (2, "Bob", "bob@example.com") ) (sql"INSERT INTO user (id, name, email) " ++ values(users)) ``` #### WHERE Clause Conditions You can easily construct AND and OR conditions: ```scala val activeFilter = sql"active = true" val nameFilter = sql"name LIKE ${"A%"}" val emailFilter = sql"email IS NOT NULL" // WHERE (active = true) AND (name LIKE 'A%') AND (email IS NOT NULL) val query1 = sql"SELECT * FROM user " ++ whereAnd(activeFilter, nameFilter, emailFilter) // WHERE (active = true) OR (name LIKE 'A%') val query2 = sql"SELECT * FROM user " ++ whereOr(activeFilter, nameFilter) ``` #### Generating SET Clauses You can easily generate SET clauses for UPDATE statements: ```scala val name = "New Name" val email = "new@example.com" val updateValues = set( sql"name = $name", sql"email = $email", sql"updated_at = NOW()" ) sql"UPDATE user " ++ updateValues ++ sql" WHERE id = 1" ``` #### Generating ORDER BY Clauses ```scala val query = sql"SELECT * FROM user " ++ orderBy(sql"name ASC", sql"created_at DESC") ``` ### Optional Conditions When conditions are optional (may not exist), you can use functions with the `Opt` suffix: ```scala val nameOpt: Option[String] = Some("Alice") val emailOpt: Option[String] = None val nameFilter = nameOpt.map(name => sql"name = $name") val emailFilter = emailOpt.map(email => sql"email = $email") // Since nameFilter is Some(...) and emailFilter is None, the WHERE clause will only contain "name = ?" val query = sql"SELECT * FROM user " ++ whereAndOpt(nameFilter, emailFilter) ``` ## Static Parameters Sometimes you may want to parameterize structural parts of the SQL statement, such as column names or table names. In such cases, you can use "static parameters" which directly embed values into the SQL statement. While dynamic parameters (regular `$value`) are processed by `PreparedStatement` and replaced with `?` in the query string, static parameters are directly embedded as strings. To use static parameters, use the `sc` function: ```scala val column = "name" val table = "user" // Treating as a dynamic parameter would result in "SELECT ? 
FROM user" // sql"SELECT $column FROM user".query[String].to[List] // Treating as a static parameter results in "SELECT name FROM user" sql"SELECT ${sc(column)} FROM ${sc(table)}".query[String].to[List] ``` In this example, the generated SQL is `SELECT name FROM user`. > **Warning**: `sc(...)` does not escape the passed string, so passing unvalidated data such as user input directly poses a risk of SQL injection attacks. Use static parameters only from safe parts of your application (constants, configurations, etc.). Common use cases for static parameters: ```scala // Dynamic sort order val sortColumn = "created_at" val sortDirection = "DESC" sql"SELECT * FROM user ORDER BY ${sc(sortColumn)} ${sc(sortDirection)}" // Dynamic table selection val schema = "public" val table = "user" sql"SELECT * FROM ${sc(schema)}.${sc(table)}" ``` ## Next Steps Now you understand how to use parameterized queries. With the ability to handle parameters, you can build more complex and practical database queries. Next, proceed to [Selecting Data](/en/tutorial/Selecting-Data.md) to learn how to retrieve data in various formats. # Query Builder We've learned how to add custom types to ldbc in the [Custom Data Type](/en/tutorial/Custom-Data-Type.md) section. In this page, we'll explain how to build queries in a type-safe way without writing SQL directly. The Query Builder is a feature that allows you to construct database queries in a more type-safe manner than SQL string interpolation. This helps detect more errors at compile time and prevents mistakes related to query structure. In this chapter, we'll explain methods for building type-safe queries. ## Prerequisites You need to set up the following dependency in your project: ```scala //> using dep "io.github.takapi327::ldbc-query-builder:0.3.0-beta11" ``` ## Basic Usage In ldbc, we use case classes to represent tables and build queries. Let's start with a simple table definition: ```scala 3 import ldbc.dsl.codec.Codec import ldbc.query.builder.* // Table definition case class User(id: Int, name: String, email: String) derives Table object User: gicen Codec[User] = Codec.derived[User] ``` The `Table` trait is automatically derived using the `derives` keyword. This allows the class properties to be treated as database columns. 
To execute queries against the defined table, use `TableQuery`: ```scala // Build query against the table val query = TableQuery[User] .select(user => user.id *: user.name *: user.email) .where(_.email === "alice@example.com") ``` In the code above: - `TableQuery[User]` - Creates a query for the `User` table - `select(...)` - Specifies which columns to retrieve - `*:` - An operator for combining multiple columns - `where(...)` - Specifies the query condition ## Customizing Table Definitions ### Changing Column Names If a property name differs from the database column name, you can specify it using the `@Column` annotation: ```scala 3 case class User( id: Int, @Column("full_name") name: String, // name property maps to full_name column email: String ) derives Table ``` ### Changing Table Names By default, the class name is used as the table name, but you can explicitly specify a table name using `Table.derived`: ```scala 3 case class User(id: Int, name: String, email: String) object User: given Table[User] = Table.derived("users") // Specify "users" as the table name ``` ## Basic Query Operations ### SELECT #### Basic SELECT To retrieve specific columns only: ```scala val select = TableQuery[User].select(_.id) // SELECT id FROM user ``` To retrieve multiple columns, use the `*:` operator: ```scala val select = TableQuery[User].select(user => user.id *: user.name) // SELECT id, name FROM user ``` To retrieve all columns: ```scala val select = TableQuery[User].selectAll // SELECT id, name, email FROM user ``` #### Aggregate Functions How to use aggregate functions (e.g., count): ```scala val select = TableQuery[User].select(_.id.count) // SELECT COUNT(id) FROM user ``` ### WHERE Conditions To add conditions to a query, use the `where` method: ```scala val where = TableQuery[User].selectAll.where(_.email === "alice@example.com") // SELECT id, name, email FROM user WHERE email = ? ``` List of comparison operators available in the `where` method: | Operator | SQL Statement | Description | |----------------------------------------|---------------------------------------|--------------------------------------------------| | `===` | `column = ?` | Equal to | | `>=` | `column >= ?` | Greater than or equal to | | `>` | `column > ?` | Greater than | | `<=` | `column <= ?` | Less than or equal to | | `<` | `column < ?` | Less than | | `<>` | `column <> ?` | Not equal to | | `!==` | `column != ?` | Not equal to (alternative syntax) | | `IS ("TRUE"/"FALSE"/"UNKNOWN"/"NULL")` | `column IS {TRUE/FALSE/UNKNOWN/NULL}` | Check if equals specified value | | `<=>` | `column <=> ?` | NULL-safe equal operator (can compare with NULL) | | `IN (value, value, ...)` | `column IN (?, ?, ...)` | Check if matches any of the specified values | | `BETWEEN (start, end)` | `column BETWEEN ? AND ?` | Check if within specified range | | `LIKE (value)` | `column LIKE ?` | Pattern matching | | `LIKE_ESCAPE (like, escape)` | `column LIKE ? ESCAPE ?` | Pattern matching with escape character | | `REGEXP (value)` | `column REGEXP ?` | Regular expression | | `<<` (value) | `column << ?` | Bit left shift | | `>>` (value) | `column >> ?` | Bit right shift | | `DIV (cond, result)` | `column DIV ? = ?` | Integer division | | `MOD (cond, result)` | `column MOD ? 
= ?` | Modulo | | `^ (value)` | `column ^ ?` | Bit XOR | | `~ (value)` | `~column = ?` | Bit NOT | Example of combining conditions: ```scala val complexWhere = TableQuery[User] .selectAll .where(user => user.email === "alice@example.com" && user.id > 5) // SELECT id, name, email FROM user WHERE email = ? AND id > ? ``` ### GROUP BY and HAVING To group data, use the `groupBy` method: ```scala val select = TableQuery[User] .select(user => user.id.count *: user.name) .groupBy(_.name) // SELECT COUNT(id), name FROM user GROUP BY name ``` You can use `having` to set conditions on grouped data: ```scala val select = TableQuery[User] .select(user => user.id.count *: user.name) .groupBy(_.name) .having(_._1 > 1) // SELECT COUNT(id), name FROM user GROUP BY name HAVING COUNT(id) > ? ``` ### ORDER BY To specify result order, use the `orderBy` method: ```scala val select = TableQuery[User] .select(user => user.id *: user.name) .orderBy(_.id) // SELECT id, name FROM user ORDER BY id ``` To specify ascending or descending order: ```scala // Ascending order val selectAsc = TableQuery[User] .select(user => user.id *: user.name) .orderBy(_.id.asc) // SELECT id, name FROM user ORDER BY id ASC // Descending order val selectDesc = TableQuery[User] .select(user => user.id *: user.name) .orderBy(_.id.desc) // SELECT id, name FROM user ORDER BY id DESC // Sorting by multiple columns val selectMultiple = TableQuery[User] .select(user => user.id *: user.name) .orderBy(user => user.name.asc *: user.id.desc) // SELECT id, name FROM user ORDER BY name ASC, id DESC ``` ### LIMIT and OFFSET Use the `limit` method to limit the number of rows and the `offset` method to specify the number of rows to skip: ```scala val select = TableQuery[User] .select(user => user.id *: user.name) .limit(10) // Get a maximum of 10 rows .offset(5) // Skip the first 5 rows // SELECT id, name FROM user LIMIT ? OFFSET ? ``` ## Table Joins There are several ways to join multiple tables. First, let's show example table definitions: ```scala 3 // Table definitions case class User(id: Int, name: String, email: String) derives Table case class Product(id: Int, name: String, price: BigDecimal) derives Table case class Order( id: Int, userId: Int, productId: Int, orderDate: LocalDateTime, quantity: Int ) derives Table // Generate TableQuery val userTable = TableQuery[User] val productTable = TableQuery[Product] val orderTable = TableQuery[Order] ``` ### Inner Join Retrieves only the matching rows from both tables: ```scala val join = userTable .join(orderTable) .on((user, order) => user.id === order.userId) .select((user, order) => user.name *: order.quantity) // SELECT user.`name`, order.`quantity` FROM user JOIN order ON user.id = order.user_id ``` ### Left Join Retrieves all rows from the left table and matching rows from the right table. If no match, the right columns will be NULL: ```scala val leftJoin = userTable .leftJoin(orderTable) .on((user, order) => user.id === order.userId) .select((user, order) => user.name *: order.quantity) // SELECT user.`name`, order.`quantity` FROM user LEFT JOIN order ON user.id = order.user_id // The return type is (String, Option[Int]) // since data from the order table might be NULL ``` ### Right Join Retrieves all rows from the right table and matching rows from the left table. 
If no match, the left columns will be NULL: ```scala val rightJoin = orderTable .rightJoin(userTable) .on((order, user) => order.userId === user.id) .select((order, user) => order.quantity *: user.name) // SELECT order.`quantity`, user.`name` FROM order RIGHT JOIN user ON order.user_id = user.id // The return type is (Option[Int], String) // since data from the order table might be NULL ``` ### Joining Multiple Tables It's possible to join three or more tables: ```scala val multiJoin = productTable .join(orderTable).on((product, order) => product.id === order.productId) .rightJoin(userTable).on((_, order, user) => order.userId === user.id) .select((product, order, user) => product.name *: order.quantity *: user.name) // SELECT // product.`name`, // order.`quantity`, // user.`name` // FROM product // JOIN order ON product.id = order.product_id // RIGHT JOIN user ON order.user_id = user.id // The return type is (Option[String], Option[Int], String) // because rightJoin is used, data from product and order tables might be NULL ``` ## INSERT Statement There are several methods to insert new records: ### Using the `insert` Method Directly specify a tuple of values: ```scala val insert = TableQuery[User].insert((1, "Alice", "alice@example.com")) // INSERT INTO user (`id`, `name`, `email`) VALUES(?, ?, ?) // Insert multiple records at once val multiInsert = TableQuery[User].insert( (1, "Alice", "alice@example.com"), (2, "Bob", "bob@example.com") ) // INSERT INTO user (`id`, `name`, `email`) VALUES(?, ?, ?), (?, ?, ?) ``` ### Using the `insertInto` Method Useful when you want to insert values only into specific columns (e.g., when using AUTO INCREMENT): ```scala val insert = TableQuery[User] .insertInto(user => user.name *: user.email) .values(("Alice", "alice@example.com")) // INSERT INTO user (`name`, `email`) VALUES(?, ?) // Insert multiple records at once val multiInsert = TableQuery[User] .insertInto(user => user.name *: user.email) .values(List( ("Alice", "alice@example.com"), ("Bob", "bob@example.com") )) // INSERT INTO user (`name`, `email`) VALUES(?, ?), (?, ?) ``` ### Using Model Objects (with `+=` and `++=` Operators) ```scala // Insert one record val insert = TableQuery[User] += User(1, "Alice", "alice@example.com") // INSERT INTO user (`id`, `name`, `email`) VALUES(?, ?, ?) // Insert multiple records val multiInsert = TableQuery[User] ++= List( User(1, "Alice", "alice@example.com"), User(2, "Bob", "bob@example.com") ) // INSERT INTO user (`id`, `name`, `email`) VALUES(?, ?, ?), (?, ?, ?) ``` ### INSERT Using SELECT Results You can also insert results from a SELECT query into another table: ```scala val insertSelect = TableQuery[User] .insertInto(user => user.id *: user.name *: user.email) .select( TableQuery[User] .select(user => user.id *: user.name *: user.email) .where(_.id > 10) ) // INSERT INTO user (`id`, `name`, `email`) // SELECT id, name, email FROM user WHERE id > ? ``` ### ON DUPLICATE KEY UPDATE Using the ON DUPLICATE KEY UPDATE clause to update existing rows when a unique or primary key is duplicated: ```scala val insert = TableQuery[User] .insert((9, "Alice", "alice@example.com")) .onDuplicateKeyUpdate(v => v.name *: v.email) // INSERT INTO user (`id`, `name`, `email`) VALUES(?, ?, ?) 
// AS new_user ON DUPLICATE KEY UPDATE `name` = new_user.`name`, `email` = new_user.`email` // When updating only specific columns val insertWithSpecificUpdate = TableQuery[User] .insert((9, "Alice", "alice@example.com")) .onDuplicateKeyUpdate(_.name, "UpdatedName") // INSERT INTO user (`id`, `name`, `email`) VALUES(?, ?, ?) // AS new_user ON DUPLICATE KEY UPDATE `name` = ? ``` ## UPDATE Statement To update existing records: ### Updating a Single Column ```scala val update = TableQuery[User].update(_.name)("UpdatedName") // UPDATE user SET name = ? ``` ### Updating Multiple Columns ```scala val update = TableQuery[User] .update(u => u.name *: u.email)(("UpdatedName", "updated-email@example.com")) // UPDATE user SET name = ?, email = ? ``` ### Conditional Updates (Skip Updating Specific Columns Based on Conditions) ```scala val shouldUpdate = true // or false val update = TableQuery[User] .update(_.name)("UpdatedName") .set(_.email, "updated-email@example.com", shouldUpdate) // If shouldUpdate is true: UPDATE user SET name = ?, email = ? // If shouldUpdate is false: UPDATE user SET name = ? ``` ### Update Using Model Objects ```scala val update = TableQuery[User].update(User(1, "UpdatedName", "updated-email@example.com")) // UPDATE user SET id = ?, name = ?, email = ? ``` ### UPDATE with WHERE Conditions ```scala val update = TableQuery[User] .update(_.name, "UpdatedName") .where(_.id === 1) // UPDATE user SET name = ? WHERE id = ? // Adding AND condition val updateWithMultipleConditions = TableQuery[User] .update(_.name, "UpdatedName") .where(_.id === 1) .and(_.email === "alice@example.com") // UPDATE user SET name = ? WHERE id = ? AND email = ? // Adding OR condition val updateWithOrCondition = TableQuery[User] .update(_.name, "UpdatedName") .where(_.id === 1) .or(_.id === 2) // UPDATE user SET name = ? WHERE id = ? OR id = ? ``` ## DELETE Statement To delete records: ### Basic DELETE ```scala val delete = TableQuery[User].delete // DELETE FROM user ``` ### DELETE with WHERE Conditions ```scala val delete = TableQuery[User] .delete .where(_.id === 1) // DELETE FROM user WHERE id = ? // Adding AND/OR conditions val deleteWithMultipleConditions = TableQuery[User] .delete .where(_.id === 1) .and(_.email === "alice@example.com") // DELETE FROM user WHERE id = ? AND email = ? ``` ### DELETE with LIMIT To delete only a specific number of records: ```scala val delete = TableQuery[User] .delete .where(_.id > 10) .limit(5) // DELETE FROM user WHERE id > ? LIMIT ? ``` ## Advanced Query Examples ### Subqueries Example using a subquery: ```scala val subQuery = TableQuery[Order] .select(order => order.userId) .where(_.quantity > 10) val mainQuery = TableQuery[User] .select(user => user.name) .where(_.id IN subQuery) // SELECT name FROM user WHERE id IN (SELECT user_id FROM order WHERE quantity > ?) ``` ### Complex Joins and Conditions ```scala val complexQuery = userTable .join(orderTable).on((user, order) => user.id === order.userId) .join(productTable).on((_, order, product) => order.productId === product.id) .select((user, order, product) => user.name *: product.name *: order.quantity) .where { case ((user, order, product)) => (user.name LIKE "A%") && (product.price > 100) } .orderBy { case ((_, _, product)) => product.price.desc } .limit(10) // SELECT // user.`name`, // product.`name`, // order.`quantity` // FROM user // JOIN order ON user.id = order.user_id // JOIN product ON order.product_id = product.id // WHERE user.name LIKE ? AND product.price > ? // ORDER BY product.price DESC // LIMIT ? 
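// As in the earlier join examples, the selected columns determine the row type:
// both joins here are inner joins, so the result type is (String, String, Int)
// (user name, product name, order quantity) and no column is wrapped in Option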
``` ### Queries with Conditional Branching When you want to change the query based on runtime conditions: ```scala val nameOption: Option[String] = Some("Alice") // or None val minIdOption: Option[Int] = Some(5) // or None val query = TableQuery[User] .selectAll .whereOpt(user => nameOption.map(name => user.name === name)) .andOpt(user => minIdOption.map(minId => user.id >= minId)) // If both nameOption and minIdOption are Some: // SELECT id, name, email FROM user WHERE name = ? AND id >= ? // If nameOption is None and minIdOption is Some: // SELECT id, name, email FROM user WHERE id >= ? // If both are None: // SELECT id, name, email FROM user ``` ## Executing Queries Execute the constructed queries as follows: ```scala 3 import ldbc.dsl.* provider.use { conn => (for // Execute a SELECT query users <- TableQuery[User].selectAll.where(_.id > 5).query.to[List] // Get results as a List // Get a single result user <- TableQuery[User].selectAll.where(_.id === 1).query.to[Option] // Execute an update query _ <- TableQuery[User].update(_.name)("NewName").where(_.id === 1).update yield ???).transaction(conn) } ``` ## Next Steps Now you know how to build type-safe queries using the Query Builder. This approach allows you to detect more errors at compile time than writing SQL directly, enabling you to write safer code. Next, proceed to [Schema](/en/tutorial/Schema.md) to learn how to define database schemas using Scala code. # Schema Code Generation In [Schema](/en/tutorial/Schema.md), we learned how to define schemas using Scala code. However, when working with existing databases, manually defining schemas can be time-consuming and prone to errors. This page explains how to automatically generate Scala code from existing SQL files. Code generation is a powerful tool for automating repetitive tasks and reducing human errors. ldbc provides functionality to automatically generate model classes and table definitions from SQL files. ## SBT Plugin Setup ### Adding the Plugin You need to set up the following dependency in your project. Add it to `project/plugins.sbt`. ```scala addSbtPlugin("io.github.takapi327" % "ldbc-plugin" % "0.3.0-beta11") ``` ### Enabling the Plugin Enable the plugin for your project in the `build.sbt` file. ```sbt lazy val root = (project in file(".")) .enablePlugins(Ldbc) ``` ## Basic Usage ### Specifying SQL Files Configure the SQL files to be parsed. You can specify a single file or multiple files. ```sbt // Specifying a single SQL file Compile / parseFiles := List( baseDirectory.value / "sql" / "schema.sql" ) // Specifying multiple SQL files Compile / parseFiles := List( baseDirectory.value / "sql" / "users.sql", baseDirectory.value / "sql" / "products.sql" ) ``` ### Specifying Directories To target all SQL files in a specific directory, use `parseDirectories`. ```sbt // Specify by directory Compile / parseDirectories := List( baseDirectory.value / "sql" ) ``` ### Generated Code After configuration, code will be automatically generated when you compile with sbt. ```shell sbt compile ``` The generated files are stored in the `target/scala-X.X/src_managed/main` directory. ### Manual Generation If you want to force code generation without using cache, use the following command: ```shell sbt generateBySchema ``` ## SQL File Format Requirements SQL files must include the following elements: ### Database Definition At the beginning of the file, always include a Create or Use statement for the database. This determines the package name and table ownership in the generated code. 
```sql -- Method 1: Creating a database CREATE DATABASE `my_app`; -- Or Method 2: Using an existing database USE `my_app`; ``` ### Table Definitions After the database definition, include table definitions. ```sql DROP TABLE IF EXISTS `users`; CREATE TABLE `users` ( `id` BIGINT AUTO_INCREMENT PRIMARY KEY, `name` VARCHAR(255) NOT NULL, `email` VARCHAR(255) NOT NULL UNIQUE, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ); ``` Complete SQL file example: ```sql CREATE DATABASE `my_app`; DROP TABLE IF EXISTS `users`; CREATE TABLE `users` ( `id` BIGINT AUTO_INCREMENT PRIMARY KEY, `name` VARCHAR(255) NOT NULL, `email` VARCHAR(255) NOT NULL UNIQUE, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ); DROP TABLE IF EXISTS `products`; CREATE TABLE `products` ( `id` BIGINT AUTO_INCREMENT PRIMARY KEY, `name` VARCHAR(255) NOT NULL, `price` DECIMAL(10, 2) NOT NULL, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ); ``` ## Detailed Configuration Options The ldbc plugin offers the following configuration keys to customize code generation. ### Configuration Key List | Key | Default Value | Details | |----------------------|-----------------|--------------------------------------------------------| | `parseFiles` | `List.empty` | List of SQL files to parse | | `parseDirectories` | `List.empty` | Directory-based specification of SQL files to parse | | `excludeFiles` | `List.empty` | List of filenames to exclude from parsing | | `customYamlFiles` | `List.empty` | List of YAML files for type customization | | `classNameFormat` | `Format.PASCAL` | Format for generated class names (PASCAL, CAMEL, SNAKE)| | `propertyNameFormat` | `Format.CAMEL` | Format for generated property names (PASCAL, CAMEL, SNAKE)| | `ldbcPackage` | `ldbc.generated`| Package name for generated files | ### Example: Detailed Configuration ```sbt Compile / parseFiles := List( baseDirectory.value / "sql" / "schema.sql" ) Compile / parseDirectories := List( baseDirectory.value / "sql" / "tables" ) Compile / excludeFiles := List( "temp_tables.sql", "test_data.sql" ) Compile / classNameFormat := PASCAL // PascalCase (MyClass) Compile / propertyNameFormat := CAMEL // camelCase (myProperty) Compile / ldbcPackage := "com.example.db" ``` ## Example of Generated Code For example, with an SQL file like: ```sql CREATE DATABASE `shop`; CREATE TABLE `products` ( `id` BIGINT AUTO_INCREMENT PRIMARY KEY, `name` VARCHAR(255) NOT NULL, `price` DECIMAL(10, 2) NOT NULL, `description` TEXT, `category_id` INT NOT NULL, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ); ``` The following Scala code would be generated: ```scala package com.example.db import ldbc.schema.* import java.time.LocalDateTime // Model class case class Product( id: Long, name: String, price: BigDecimal, description: Option[String], categoryId: Int, createdAt: LocalDateTime ) // Table definition and query builder object Product { val table = TableQuery[ProductTable] class ProductTable extends Table[Product]("products"): def id: Column[Long] = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY) def name: Column[String] = column[String]("name", VARCHAR(255), NOT_NULL) def price: Column[BigDecimal] = column[BigDecimal]("price", DECIMAL(10, 2), NOT_NULL) def description: Column[Option[String]] = column[Option[String]]("description", TEXT) def categoryId: Column[Int] = column[Int]("category_id", INT, NOT_NULL) def createdAt: Column[LocalDateTime] = column[LocalDateTime]("created_at", TIMESTAMP, NOT_NULL, DEFAULT_CURRENT_TIMESTAMP) override def * : 
Column[Product] = (id *: name *: price *: description *: categoryId *: createdAt).to[Product] } ``` ## Customizing Types If you want to change the types in the automatically generated code to your own types, you can customize them using YAML files. ### YAML File Configuration First, create a YAML file for customization. ```yaml # custom_types.yml database: name: 'shop' tables: - name: 'products' columns: - name: 'category_id' type: 'ProductCategory' object: extends: - 'com.example.ProductTypeMapping' ``` Then, add this YAML file to your project configuration. ```sbt Compile / customYamlFiles := List( baseDirectory.value / "config" / "custom_types.yml" ) ``` ### Custom Type Implementation Next, implement the custom type conversion referenced in the YAML file. ```scala // com/example/ProductTypeMapping.scala package com.example import ldbc.dsl.Codec sealed trait ProductCategory { def id: Int } object ProductCategory { case object Electronics extends ProductCategory { val id = 1 } case object Books extends ProductCategory { val id = 2 } case object Clothing extends ProductCategory { val id = 3 } def fromId(id: Int): ProductCategory = id match { case 1 => Electronics case 2 => Books case 3 => Clothing case _ => throw new IllegalArgumentException(s"Unknown category ID: $id") } } trait ProductTypeMapping { given Codec[ProductCategory] = Codec[Int].imap(ProductCategory.fromId)(_.id) } ``` ### Generated Code After Customization With the above configuration, code like the following would be generated: ```scala package ldbc.generated.shop import ldbc.schema.* import java.time.LocalDateTime import com.example.ProductCategory case class Product( id: Long, name: String, price: BigDecimal, description: Option[String], categoryId: ProductCategory, // Changed to custom type createdAt: LocalDateTime ) object Product extends com.example.ProductTypeMapping { val table = TableQuery[ProductTable] class ProductTable extends Table[Product]("products"): def id: Column[Long] = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY) def name: Column[String] = column[String]("name", VARCHAR(255), NOT_NULL) def price: Column[BigDecimal] = column[BigDecimal]("price", DECIMAL(10, 2), NOT_NULL) def description: Column[Option[String]] = column[Option[String]]("description", TEXT) def categoryId: Column[Int] = column[Int]("category_id", INT, NOT_NULL) // Actual column type doesn't change def createdAt: Column[LocalDateTime] = column[LocalDateTime]("created_at", TIMESTAMP, NOT_NULL, DEFAULT_CURRENT_TIMESTAMP) override def * : Column[Product] = (id *: name *: price *: description *: categoryId *: createdAt).to[Product] } ``` ## Detailed YAML Customization Syntax In the customization YAML file, the following configurations are possible: ```yaml database: name: '{database_name}' tables: - name: '{table_name}' columns: # Optional - name: '{column_name}' type: '{desired_Scala_type}' class: # Optional extends: - '{package_path_of_trait_to_extend_the_model_class}' object: # Optional extends: - '{package_path_of_trait_to_extend_the_object}' ``` ### Example: Adding Traits to a Model Class ```yaml database: name: 'shop' tables: - name: 'products' class: extends: - 'com.example.JsonSerializable' - 'com.example.Validatable' ``` ### Example: Customizing Multiple Tables and Columns ```yaml database: name: 'shop' tables: - name: 'products' columns: - name: 'price' type: 'Money' object: extends: - 'com.example.MoneyTypeMapping' - name: 'orders' columns: - name: 'status' type: 'OrderStatus' object: extends: - 
'com.example.OrderStatusMapping' ``` ## Using Generated Code Generated code can be used like any other ldbc code. ```scala import ldbc.dsl.* import ldbc.generated.shop.Product val provider = MySQLConnectionProvider(...) // Referencing table queries val products = Product.table // Executing queries val allProducts = provider.use { conn => products.filter(_.price > 100).all.run(conn) } ``` ## Best Practices for Code Generation ### 1. Clear SQL File Structure - Group related tables in the same file - Always include database definition at the beginning of each file - Add appropriate comments to explain SQL ### 2. Consistent Naming Conventions - Use consistent naming conventions for tables and columns in SQL - Explicitly configure naming rules for the generated Scala code ### 3. Smart Use of Custom Types - Use custom types for domain-specific concepts - Leverage custom types to encapsulate complex business logic ### 4. Automate Regeneration Consider integrating into CI/CD pipelines for regular schema updates. ## Troubleshooting ### When Code Is Not Generated - Verify SQL file paths are correct - Ensure database definition is at the beginning of the SQL files - Check for SQL syntax errors ### When Type Conversion Errors Occur - Verify custom YAML configurations are correct - Ensure referenced packages and classes exist in the classpath - Check that implicit type conversions (given/using) are properly defined ### When Generated Code Has Issues - Don't manually modify; instead, fix the SQL or YAML files and regenerate - Check for unsupported SQL features or special types ## Tutorial Completion Congratulations! You've completed all sections of the ldbc tutorial. Now you have the basic skills and knowledge to develop database applications using ldbc. Throughout this journey, you've learned: - Basic usage and setup of ldbc - Database connections and query execution - Reading and writing data with type-safe mapping - Transaction management and error handling - Advanced features (logging, custom data types, query builders) - Schema definition and code generation Use this knowledge to build type-safe and efficient database applications. For more information and updates, refer to the official documentation and GitHub repository. Happy coding with ldbc! # Schema You've learned how to build type-safe queries with the [Query Builder](/en/tutorial/Query-Builder.md). This page explains how to define database schemas in Scala code and map tables to models. Schema definitions are a critical element that clearly defines the boundary between your application and the database. ldbc provides functionality to define schemas in Scala code, leveraging the powerful type system to represent database structures. This chapter explains how to handle database schemas in Scala code, particularly how to manually write schemas, which is useful when starting to write an application without an existing database. If you already have schemas in your database, you can skip this work by using the Code Generator. ## Preparation You need to set up the following dependency in your project: ```scala //> using dep "io.github.takapi327::ldbc-schema:0.3.0-beta11" ``` The following code examples assume these imports: ```scala 3 import ldbc.schema.* ``` ## Basic Table Definition In ldbc, you create table definitions by extending the `Table` class. This allows you to associate Scala models (such as case classes) with database tables. 
### Basic Table Definition ```scala 3 // Model definition case class User( id: Long, name: String, age: Option[Int] // Use Option for columns that allow NULL ) // Table definition class UserTable extends Table[User]("user"): // "user" is the table name // Column definitions def id: Column[Long] = column[Long]("id") def name: Column[String] = column[String]("name") def age: Column[Option[Int]] = column[Option[Int]]("age") // Mapping with the model override def * : Column[User] = (id *: name *: age).to[User] ``` In the example above: - `Table[User]` indicates that this table is associated with the User model - `"user"` is the table name in the database - Each column is defined with the `column` method - The `*` method defines how to map all columns to the model ### Table Definition with Data Types You can specify MySQL data types and attributes for columns: ```scala 3 class UserTable extends Table[User]("user"): // Column definitions with data types and attributes def id: Column[Long] = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY) def name: Column[String] = column[String]("name", VARCHAR(255)) def age: Column[Option[Int]] = column[Option[Int]]("age", INT) override def * : Column[User] = (id *: name *: age).to[User] ``` ## Using Dedicated Column Definition Methods ldbc also provides specialized column definition methods for each data type. Because the variable name is used as the column name, you can write code more simply. ```scala 3 class UserTable extends Table[User]("user"): def id: Column[Long] = bigint().autoIncrement.primaryKey def name: Column[String] = varchar(255) def age: Column[Option[Int]] = int().defaultNull override def * : Column[User] = (id *: name *: age).to[User] ``` Using dedicated column definition methods allows you to set attributes appropriate for that data type, enabling you to write more type-safe code. 
### Explicitly Specifying Column Names If you want to explicitly specify column names: ```scala 3 class UserTable extends Table[User]("user"): def id: Column[Long] = bigint("user_id").autoIncrement.primaryKey def name: Column[String] = varchar("user_name", 255) def age: Column[Option[Int]] = int("user_age").defaultNull override def * : Column[User] = (id *: name *: age).to[User] ``` ### Setting Column Naming Conventions You can change the naming convention for columns using `Naming`: ```scala 3 class UserTable extends Table[User]("user"): // Convert column names to Pascal case (e.g., userId → UserId) given Naming = Naming.PASCAL def userId: Column[Long] = bigint().autoIncrement.primaryKey def userName: Column[String] = varchar(255) def userAge: Column[Option[Int]] = int().defaultNull override def * : Column[User] = (userId *: userName *: userAge).to[User] ``` Available naming conventions: - `Naming.SNAKE` (default): Snake case (e.g., user_id) - `Naming.CAMEL`: Camel case (e.g., userId) - `Naming.PASCAL`: Pascal case (e.g., UserId) ## Numeric Column Definitions For numeric columns, the following operations are possible: ### Integer Types ```scala 3 def id: Column[Long] = bigint().autoIncrement.primaryKey def count: Column[Int] = int().unsigned.default(0) // Set as unsigned, default value 0 def smallValue: Column[Short] = smallint().unsigned ``` ### Decimal Types ```scala 3 def price: Column[BigDecimal] = decimal(10, 2) // 10 digits in total, 2 decimal places def rating: Column[Double] = double(5) // Double-precision floating point def score: Column[Float] = float(4) // Single-precision floating point ``` ## String Column Definitions For string columns, the following operations are possible: ```scala 3 def name: Column[String] = varchar(255) // Variable-length string (max 255 characters) def code: Column[String] = char(5) // Fixed-length string (5 characters) def description: Column[String] = text() // Text type def content: Column[String] = longtext() // Long text type // Specifying character set def japaneseText: Column[String] = text().charset(Character.utf8mb4) // Specifying collation def sortableText: Column[String] = varchar(255) .charset(Character.utf8mb4) .collate(Collate.utf8mb4_unicode_ci) ``` ## Binary Column Definitions Defining columns for binary data: ```scala 3 def data: Column[Array[Byte]] = binary(255) // Fixed-length binary def flexData: Column[Array[Byte]] = varbinary(1000) // Variable-length binary def largeData: Column[Array[Byte]] = blob() // Binary Large Object ``` ## Date and Time Column Definitions Defining columns for dates and times: ```scala 3 def birthDate: Column[LocalDate] = date() // Date only def createdAt: Column[LocalDateTime] = datetime() // Date and time def updatedAt: Column[LocalDateTime] = timestamp() .defaultCurrentTimestamp(onUpdate = true) // Auto-update on creation and modification def startTime: Column[LocalTime] = time() // Time only def fiscalYear: Column[Int] = year() // Year only ``` ## ENUM Type and Special Data Types Example of using ENUM type: ```scala 3 // ENUM definition enum UserStatus extends Enum: case Active, Inactive, Suspended object UserStatus extends EnumDataType[UserStatus] // Using ENUM in table definition class UserTable extends Table[User]("user"): // ... 
def status: Column[UserStatus] = `enum`[UserStatus]("status") ``` Other special data types: ```scala 3 def isActive: Column[Boolean] = boolean() // BOOLEAN type def uniqueId: Column[BigInt] = serial() // SERIAL type (auto-increment BIGINT UNSIGNED) ``` ## Setting Default Values How to set default values for columns: ```scala 3 def score: Column[Int] = int().default(100) // Fixed value def updatedAt: Column[LocalDateTime] = timestamp() .defaultCurrentTimestamp() // Current timestamp def createdDate: Column[LocalDate] = date() .defaultCurrentDate // Current date def nullableField: Column[Option[String]] = varchar(255) .defaultNull // NULL value ``` ## Primary Keys, Foreign Keys, and Indexes ### Single-Column Primary Key ```scala 3 def id: Column[Long] = bigint().autoIncrement.primaryKey ``` ### Composite Primary Key Definition ```scala 3 class OrderItemTable extends Table[OrderItem]("order_item"): def orderId: Column[Int] = int() def itemId: Column[Int] = int() def quantity: Column[Int] = int().default(1) // Composite primary key definition override def keys = List( PRIMARY_KEY(orderId *: itemId) ) override def * : Column[OrderItem] = (orderId *: itemId *: quantity).to[OrderItem] ``` ### Index Definition ```scala 3 class UserTable extends Table[User]("user"): // ...column definitions... // Index definitions override def keys = List( INDEX_KEY("idx_user_name", name), // Named index UNIQUE_KEY("idx_user_email", email) // Unique index ) ``` You can also specify index types: ```scala 3 override def keys = List( INDEX_KEY( Some("idx_name"), Some(Index.Type.BTREE), // Can specify BTREE or HASH index type None, name ) ) ``` ### Foreign Key Definition To define foreign keys, first create a TableQuery for the referenced table: ```scala 3 // Referenced table val userTable = TableQuery[UserTable] // Referencing table class ProfileTable extends Table[Profile]("profile"): def id: Column[Long] = bigint().autoIncrement.primaryKey def userId: Column[Long] = bigint() // ...other columns... 
// Foreign key definition def fkUser = FOREIGN_KEY( "fk_profile_user", // Foreign key name userId, // Referencing column REFERENCE(userTable)(_.id) // Referenced table and column .onDelete(Reference.ReferenceOption.CASCADE) // Action on delete .onUpdate(Reference.ReferenceOption.RESTRICT) // Action on update ) override def keys = List( PRIMARY_KEY(id), fkUser // Add foreign key ) ``` Reference constraint options (`ReferenceOption`): - `RESTRICT`: Prevents changes to parent record as long as child records exist - `CASCADE`: Changes child records along with parent record changes - `SET_NULL`: Sets relevant columns in child records to NULL when parent record changes - `NO_ACTION`: Delays constraint checking (basically the same as RESTRICT) - `SET_DEFAULT`: Sets relevant columns in child records to default values when parent record changes ## Setting Constraints If you want to define constraints with specific naming conventions, you can use `CONSTRAINT`: ```scala 3 override def keys = List( CONSTRAINT( "pk_user", // Constraint name PRIMARY_KEY(id) // Constraint type ), CONSTRAINT( "fk_user_department", FOREIGN_KEY(departmentId, REFERENCE(departmentTable)(_.id)) ) ) ``` ## Complex Mapping with Models ### Mapping Nested Models ```scala 3 case class User( id: Long, name: UserName, // Nested type contact: Contact // Nested type ) case class UserName(first: String, last: String) case class Contact(email: String, phone: Option[String]) class UserTable extends Table[User]("user"): def id: Column[Long] = bigint().autoIncrement.primaryKey def firstName: Column[String] = varchar(50) def lastName: Column[String] = varchar(50) def email: Column[String] = varchar(100) def phone: Column[Option[String]] = varchar(20).defaultNull // Nested value mapping def userName: Column[UserName] = (firstName *: lastName).to[UserName] def contact: Column[Contact] = (email *: phone).to[Contact] override def * : Column[User] = (id *: userName *: contact).to[User] ``` This configuration will generate the following SQL: ```sql CREATE TABLE `user` ( `id` BIGINT NOT NULL AUTO_INCREMENT, `firstName` VARCHAR(50) NOT NULL, `lastName` VARCHAR(50) NOT NULL, `email` VARCHAR(100) NOT NULL, `phone` VARCHAR(20) NULL, PRIMARY KEY (`id`) ) ``` ## Schema Generation and DDL Execution Generate DDL (Data Definition Language) from table definitions and create schemas in the database. ### Generating TableQuery ```scala 3 val users = TableQuery[UserTable] val profiles = TableQuery[ProfileTable] val orders = TableQuery[OrderTable] ``` ### Generating and Executing Schema ```scala 3 import ldbc.dsl.* // Combining schemas val schema = users.schema ++ profiles.schema ++ orders.schema // Apply schema using database connection provider.use { conn => DBIO.sequence( // Create tables (only if they don't exist) schema.createIfNotExists, // Other operations such as data insertion... 
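// e.g. a hypothetical seed row could be inserted here once the tables exist:
//   sql"INSERT INTO user (id, name, age) VALUES (1, 'Alice', 20)".update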
).commit(conn) } ``` ### DDL Operations ```scala 3 val userSchema = users.schema // Various DDL operations userSchema.create // Create table userSchema.createIfNotExists // Create table only if it doesn't exist userSchema.drop // Drop table userSchema.dropIfExists // Drop table only if it exists userSchema.truncate // Delete all data in the table ``` ### Checking DDL Statements How to check the actual SQL that will be executed: ```scala 3 // Check creation query userSchema.create.statements.foreach(println) // Check conditional creation query userSchema.createIfNotExists.statements.foreach(println) // Check drop query userSchema.drop.statements.foreach(println) // Check conditional drop query userSchema.dropIfExists.statements.foreach(println) // Check truncate query userSchema.truncate.statements.foreach(println) ``` ## Setting Column Attributes Various attributes can be set for columns: ```scala 3 def id: Column[Long] = bigint() .autoIncrement // Auto increment .primaryKey // Primary key .comment("User ID") // Comment def email: Column[String] = varchar(255) .unique // Unique constraint .comment("Email address") def status: Column[String] = varchar(20) .charset(Character.utf8mb4) // Character set .collate(Collate.utf8mb4_unicode_ci) // Collation def hiddenField: Column[String] = varchar(100) .invisible // Invisible attribute (not retrieved with SELECT *) def formatField: Column[String] = varchar(100) .setAttributes(COLUMN_FORMAT.DYNAMIC[String]) // Column storage format def storageField: Column[Array[Byte]] = blob() .setAttributes(STORAGE.DISK[Array[Byte]]) // Storage type ``` ## Summary Using the schema module of ldbc allows you to define safe and expressive database schemas by leveraging Scala's type system. Key features: - Strong type safety: Detect schema issues at compile time - Rich data type support: Supports all MySQL data types - Flexible model mapping: Handles everything from simple case classes to complex nested models - DDL generation: Generate SQL directly from table definitions - Extensibility: Supports custom data types and mapping functions ## Next Steps Now you understand how to define schemas in Scala code. Manually defining schemas allows you to closely integrate your application and database structures. Next, proceed to [Schema Code Generation](/en/tutorial/Schema-Code-Generation.md) to learn how to automatically generate schema code from existing SQL files. # Selecting Data Now that we've learned about [Parameterized Queries](/en/tutorial/Parameterized-Queries.md), let's look at how to retrieve data in various formats. This page explains how to efficiently fetch data using SELECT queries and map them to Scala types. One of ldbc's most powerful features is its ability to easily map database results to Scala types. It can handle various data formats, from simple primitive types to complex case classes. ## Basic Data Retrieval Workflow The basic flow for retrieving data with ldbc is as follows: 1. Create an SQL query with the `sql` interpolator 2. Specify the result type with `.query[T]` 3. Convert results to a collection with `.to[Collection]` (optional) 4. Execute the query using `.readOnly()`/`.commit()`/`.transaction()`, etc. 5. Process the results Let's look at this flow along with the type transformations in code. ## Reading Rows into Collections In our first query, let's look at an example that retrieves some usernames into a list and outputs them. 
We'll show how the types change at each step: ```scala sql"SELECT name FROM user" .query[String] // Query[String] .to[List] // DBIO[List[String]] .readOnly(conn) // IO[List[String]] .unsafeRunSync() // List[String] .foreach(println) // Unit ``` Let's explain this code in detail: - `sql"SELECT name FROM user"` - Defines the SQL query. - `.query[String]` - Maps each row result to a `String` type. This generates a `Query[String]` type. - `.to[List]` - Aggregates the results into a `List`. This generates a `DBIO[List[String]]` type. This method can be used with any collection type that implements `FactoryCompat` (such as `List`, `Vector`, `Set`, etc.). - `.readOnly(conn)` - Executes the query using the connection in read-only mode. The return value is `IO[List[String]]`. - `.unsafeRunSync()` - Executes the IO monad to get the actual result (`List[String]`). - `.foreach(println)` - Outputs each element of the result. ## Multiple Column Queries Of course, you can select multiple columns and map them to tuples: ```scala sql"SELECT name, email FROM user" .query[(String, String)] // Query[(String, String)] .to[List] // DBIO[List[(String, String)]] .readOnly(conn) // IO[List[(String, String)]] .unsafeRunSync() // List[(String, String)] .foreach { case (name, email) => println(s"Name: $name, Email: $email") } ``` In multiple column queries, it's important that the order of selected columns matches the order of type parameters in the tuple. In the above example, `name` corresponds to the 1st column (tuple's `_1`), and `email` to the 2nd column (tuple's `_2`). ## Mapping to Case Classes While tuples are convenient, we recommend using case classes to improve code readability. ldbc can automatically map query results to case classes: ```scala // Case class representing user information case class User(id: Long, name: String, email: String) // Query execution and mapping sql"SELECT id, name, email FROM user" .query[User] // Query[User] .to[List] // DBIO[List[User]] .readOnly(conn) // IO[List[User]] .unsafeRunSync() // List[User] .foreach(user => println(s"ID: ${user.id}, Name: ${user.name}, Email: ${user.email}")) ``` **Important**: The field names in the case class need to match the column names selected in the SQL query. The order also needs to match, but ldbc will map appropriately if the names match exactly. ![Selecting Data](../../img/data_select.png) ## Joining Multiple Tables and Nested Case Classes When retrieving data from multiple tables using `JOIN`, you can map to nested case class structures. In the following example, we join the `city` and `country` tables and map the result to a `CityWithCountry` class: ```scala // Case class representing a city case class City(id: Long, name: String) // Case class representing a country case class Country(code: String, name: String, region: String) // Case class combining city and country information case class CityWithCountry(city: City, country: Country) // Executing a join query sql""" SELECT city.id, city.name, country.code, country.name, country.region FROM city JOIN country ON city.country_code = country.code """ .query[CityWithCountry] // Query[CityWithCountry] .to[List] // DBIO[List[CityWithCountry]] .readOnly(conn) // IO[List[CityWithCountry]] .unsafeRunSync() // List[CityWithCountry] .foreach(cityWithCountry => println( s"City: ${cityWithCountry.city.name}, Country: ${cityWithCountry.country.name}" )) ``` A feature of ldbc is that columns specified in the format `table_name.column_name` are automatically mapped to `class_name.field_name`. 
This enables the following mappings in the above example: - `city.id` → `CityWithCountry.city.id` - `city.name` → `CityWithCountry.city.name` - `country.code` → `CityWithCountry.country.code` - `country.name` → `CityWithCountry.country.name` - `country.region` → `CityWithCountry.country.region` ![Selecting Data](../../img/data_multi_select.png) ## Using Tuples for Join Queries Instead of nested case classes, you can also use tuples to retrieve data from multiple tables: ```scala case class City(id: Long, name: String) case class Country(code: String, name: String, region: String) sql""" SELECT city.id, city.name, country.code, country.name, country.region FROM city JOIN country ON city.country_code = country.code """ .query[(City, Country)] // Query[(City, Country)] .to[List] // DBIO[List[(City, Country)]] .readOnly(conn) // IO[List[(City, Country)]] .unsafeRunSync() // List[(City, Country)] .foreach { case (city, country) => println(s"City: ${city.name}, Country: ${country.name}") } ``` It's important to note that when using tuples, the table name and the case class name need to match. That is, the `city` table is mapped to the `City` class, and the `country` table is mapped to the `Country` class. ## Table Aliases and Mapping When using aliases for tables in SQL statements, you need to match the case class name with that alias: ```scala // Case class names matching alias names case class C(id: Long, name: String) case class CT(code: String, name: String, region: String) sql""" SELECT c.id, c.name, ct.code, ct.name, ct.region FROM city AS c JOIN country AS ct ON c.country_code = ct.code """ .query[(C, CT)] // Query[(C, CT)] .to[List] // DBIO[List[(C, CT)]] .readOnly(conn) // IO[List[(C, CT)]] .unsafeRunSync() // List[(C, CT)] .foreach { case (city, country) => println(s"City: ${city.name}, Country: ${country.name}") } ``` ## Getting a Single Result (Option Type) If you want to get a single result or an optional result (0 or 1 record) instead of a list, you can use `.to[Option]`: ```scala case class User(id: Long, name: String, email: String) // Searching for a single user by ID sql"SELECT id, name, email FROM user WHERE id = ${userId}" .query[User] // Query[User] .to[Option] // DBIO[Option[User]] .readOnly(conn) // IO[Option[User]] .unsafeRunSync() // Option[User] .foreach(user => println(s"Found user: ${user.name}")) ``` If no result is found, `None` is returned, and if one is found, `Some(User(...))` is returned. 
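As a small sketch building on the example above (reusing the same `User` case class, `conn`, and a hypothetical `userId` value), you can make both outcomes explicit with a pattern match instead of `foreach`:

```scala
// Handle the "found" and "not found" cases explicitly
sql"SELECT id, name, email FROM user WHERE id = ${userId}"
  .query[User]
  .to[Option]
  .readOnly(conn)
  .unsafeRunSync() match {
    case Some(user) => println(s"Found user: ${user.name}")
    case None       => println(s"No user found with id $userId")
  }
```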
## Choosing Query Execution Methods ldbc provides different query execution methods depending on the purpose: - `.readOnly(conn)` - Used for read-only operations (such as SELECT statements) - `.commit(conn)` - Executes write operations in auto-commit mode - `.rollback(conn)` - Executes write operations and always rolls back (for testing) - `.transaction(conn)` - Executes operations within a transaction and commits only on success ```scala // Example of a read-only operation sql"SELECT * FROM users" .query[User] .to[List] .readOnly(conn) // Example of a write operation (auto-commit) sql"UPDATE users SET name = ${newName} WHERE id = ${userId}" .update .commit(conn) // Multiple operations in a transaction (for { userId <- sql"INSERT INTO users (name, email) VALUES (${name}, ${email})".returning[Long] _ <- sql"INSERT INTO user_roles (user_id, role_id) VALUES (${userId}, ${roleId})".update } yield userId).transaction(conn) ``` ## Combining Collection Operations with Queries By applying Scala collection operations to retrieved data, you can concisely describe more complex data processing: ```scala // Example of grouping users sql"SELECT id, name, department FROM employees" .query[(Long, String, String)] // ID, name, department .to[List] .readOnly(conn) .unsafeRunSync() .groupBy(_._3) // Group by department .map { case (department, employees) => (department, employees.map(_._2)) // Map to department name and list of employee names } .foreach { case (department, names) => println(s"Department: $department, Employees: ${names.mkString(", ")}") } ``` ## Summary ldbc provides features for retrieving data from databases in a type-safe and intuitive manner. In this tutorial, we covered: - The basic workflow for data retrieval - Single-column and multi-column queries - Mapping to case classes - Joining multiple tables and nested data structures - Getting single and multiple results - Various execution methods Use this knowledge to efficiently retrieve data from databases in your applications and maximize the benefits of Scala's type system. ## Next Steps Now that you understand how to retrieve data from databases in various formats, you've seen how type-safe mapping allows you to map database results directly to Scala data structures. Next, let's move on to [Updating Data](/en/tutorial/Updating-Data.md) to learn how to insert, update, and delete data. # Setup Welcome to the first step in getting started with ldbc! This page explains how to prepare your development environment and database. ## Requirements - JDK 21 or higher - Scala 3 - Docker (for database environment) - [Scala CLI](https://scala-cli.virtuslab.org/) (recommended) ## Database Setup First, start a MySQL database using Docker. 
Create the following docker-compose.yml file:

```yaml
services:
  mysql:
    image: mysql:8.4.0
    container_name: ldbc
    environment:
      MYSQL_USER: 'ldbc'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      - 13306:3306
    volumes:
      - ./database:/docker-entrypoint-initdb.d
    healthcheck:
      test: [ "CMD", "mysqladmin", "ping", "-h", "localhost" ]
      timeout: 20s
      retries: 10
```

Next, create the following SQL file in the `database` directory to set up initial data:

```sql
-- 01-create-database.sql
CREATE DATABASE IF NOT EXISTS sandbox_db;

USE sandbox_db;

-- Create tables
CREATE TABLE IF NOT EXISTS `user` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR(50) NOT NULL,
  `email` VARCHAR(100) NOT NULL,
  `created_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `updated_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS `product` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR(100) NOT NULL,
  `price` DECIMAL(10, 2) NOT NULL,
  `created_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `updated_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS `order` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `user_id` INT NOT NULL,
  `product_id` INT NOT NULL,
  `order_date` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `quantity` INT NOT NULL,
  `created_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `updated_at` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES `user` (id),
  FOREIGN KEY (product_id) REFERENCES `product` (id)
);

-- Insert initial data
INSERT INTO user (name, email) VALUES
  ('Alice', 'alice@example.com'),
  ('Bob', 'bob@example.com'),
  ('Charlie', 'charlie@example.com');

INSERT INTO product (name, price) VALUES
  ('Laptop', 999.99),
  ('Mouse', 19.99),
  ('Keyboard', 49.99),
  ('Monitor', 199.99);

INSERT INTO `order` (user_id, product_id, quantity) VALUES
  (1, 1, 1), -- Alice ordered 1 Laptop
  (1, 2, 2), -- Alice ordered 2 Mice
  (2, 3, 1), -- Bob ordered 1 Keyboard
  (3, 4, 1); -- Charlie ordered 1 Monitor
```

Start the database using Docker Compose:

```bash
docker compose up -d
```

## Setting up a Scala Project

This tutorial uses [Scala CLI](https://scala-cli.virtuslab.org/) to get started easily. If you haven't installed it yet, you can do so with the following command:

```bash
# For macOS
brew install Virtuslab/scala-cli/scala-cli

# For other OS, please refer to Scala CLI's official website
```

### Your First ldbc Project

Create a new directory and set up your first ldbc project:

```bash
mkdir ldbc-tutorial
cd ldbc-tutorial
touch FirstSteps.scala
```

Add the following code to `FirstSteps.scala`:

```scala
//> using scala "3.3.5"
//> using dep "io.github.takapi327::ldbc-dsl:0.3.0-beta11"
//> using dep "io.github.takapi327::ldbc-connector:0.3.0-beta11"

import cats.effect.*
import cats.syntax.all.*
import ldbc.connector.*
import ldbc.dsl.*

object FirstSteps extends IOApp.Simple:

  // A program that returns a simple constant
  val simpleProgram: DBIO[Int] = DBIO.pure(42)

  // Database connection configuration
  val provider = ConnectionProvider
    .default[IO]("127.0.0.1", 13306, "ldbc", "password", "sandbox_db")
    .setSSL(SSL.Trusted)

  def run: IO[Unit] =
    // Execute the program
    provider.use { conn =>
      simpleProgram.readOnly(conn).flatMap { result =>
        IO.println(s"Value retrieved from database: $result")
      }
    }
```

Run the program using Scala CLI:

```bash
scala-cli FirstSteps.scala
```

If you see "Value retrieved from database: 42", you've succeeded!
This doesn't actually query the database yet, but it confirms that you have the basic ldbc structure and connection setup working. ## Automatic Setup Script (Optional) We also provide a Scala CLI script that automatically handles all the setup: ```bash scala-cli https://github.com/takapi327/ldbc/tree/master/docs/src/main/scala/00-Setup.scala --dependency io.github.takapi327::ldbc-dsl:0.3.0-beta11 --dependency io.github.takapi327::ldbc-connector:0.3.0-beta11 ``` ## Next Steps You're now ready to use ldbc! Proceed to [Connection](/en/tutorial/Connection.md) to learn more about database connections. # Simple Programs Now that we've learned how to configure [connections](/en/tutorial/Connection.md), let's finally run some queries against the database. This page explains how to create basic database programs using ldbc. In ldbc, we use the `DBIO` monad to represent a series of database operations. This powerful abstraction allows you to combine multiple queries and perform database operations while enjoying the benefits of pure functional programming. The `DBIO` monad represents operations that will be executed using an actual database connection. This delays the execution of queries until an actual connection is established, allowing for efficient resource management. Note: The programs used on this page assume the environment built in the setup. ## First Program: Retrieving a Simple Value In our first program, we'll learn how to connect to the database and retrieve a simple calculation result. First, we use the `sql string interpolator` to create a query to the database. This feature allows us to write SQL queries safely and concisely. ```scala 3 // Create an SQL query and specify the return type val program: DBIO[Option[Int]] = sql"SELECT 2".query[Int].to[Option] ``` The above code does the following: 1. `sql"SELECT 2"`: Creates an SQL query using the SQL string interpolator 2. `.query[Int]`: Specifies that we expect a result of type `Int` 3. `.to[Option]`: Specifies that we want to get the result as an `Option[Int]` type (will be `None` if no result exists) The `to` method allows you to specify how to retrieve results. The main methods and their return types are as follows: | Method | Return Type | Notes | |--------------|----------------|------------------------------| | `to[List]` | `F[List[A]]` | Retrieves all results as a list | | `to[Option]` | `F[Option[A]]` | Expects 0 or 1 result, errors if more | | `unsafe` | `F[A]` | Expects exactly 1 result, errors otherwise | Next, we execute this program. We acquire a database connection, run the query, and display the results: ```scala 3 // Acquire a database connection and execute the program provider .use { conn => program.readOnly(conn).map(println(_)) } .unsafeRunSync() ``` In the above code: 1. `provider.use`: Acquires a connection from the database connection provider 2. `program.readOnly(conn)`: Executes the created query in read-only mode 3. `.map(println(_))`: Displays the result to standard output 4. `.unsafeRunSync()`: Executes the IO effect When you run this program, it will return the result `Some(2)` from the database. 
**Execute with Scala CLI** This program can be easily executed using Scala CLI: ```shell scala-cli https://github.com/takapi327/ldbc/tree/master/docs/src/main/scala/02-Program.scala --dependency io.github.takapi327::ldbc-dsl:0.3.0-beta11 --dependency io.github.takapi327::ldbc-connector:0.3.0-beta11 ``` ## Second Program: Combining Multiple Queries In real-world applications, you often want to execute multiple queries in a single transaction. The `DBIO` monad makes this easy. You can use for-comprehension to combine multiple queries into one larger program: ```scala 3 // Create a program by combining multiple queries val program: DBIO[(List[Int], Option[Int], Int)] = for result1 <- sql"SELECT 1".query[Int].to[List] // Get results as a list result2 <- sql"SELECT 2".query[Int].to[Option] // Get results as an Option result3 <- sql"SELECT 3".query[Int].unsafe // Get the result directly yield (result1, result2, result3) ``` In the above for-comprehension, we execute three different queries in sequence and bind each result to a variable. Finally, we return the three results as a tuple. The execution method is the same as before: ```scala 3 // Acquire a database connection and execute the program provider .use { conn => program.readOnly(conn).map(println(_)) } .unsafeRunSync() ``` The execution result will look like `(List(1), Some(2), 3)`. This is a tuple containing the results of the three queries. **Execute with Scala CLI** ```shell scala-cli https://github.com/takapi327/ldbc/tree/master/docs/src/main/scala/03-Program.scala --dependency io.github.takapi327::ldbc-dsl:0.3.0-beta11 --dependency io.github.takapi327::ldbc-connector:0.3.0-beta11 ``` ## Third Program: Data Update Operations In addition to reading, you can also create programs that write to the database. Here's an example of inserting a new record into a user table: ```scala 3 // Execute an INSERT statement and get the number of affected rows val program: DBIO[Int] = sql"INSERT INTO user (name, email) VALUES ('Carol', 'carol@example.com')".update ``` Using the `update` method, you can execute update queries such as INSERT, UPDATE, DELETE, etc. The return value is the number of affected rows (in this case, 1). When executing update queries, you need to use the `commit` method instead of `readOnly` to commit the transaction: ```scala 3 // Acquire a database connection, execute the update program, and commit provider .use { conn => program.commit(conn).map(println(_)) } .unsafeRunSync() ``` The `commit` method automatically enables `AutoCommit` and commits the transaction after the query execution. This ensures that changes are persisted to the database. 
**Execute with Scala CLI** ```shell scala-cli https://github.com/takapi327/ldbc/tree/master/docs/src/main/scala/04-Program.scala --dependency io.github.takapi327::ldbc-dsl:0.3.0-beta11 --dependency io.github.takapi327::ldbc-connector:0.3.0-beta11 ``` ## Transaction Control ldbc provides the following main transaction control methods: - `readOnly(conn)`: Executes in a read-only transaction (data updates are not allowed) - `commit(conn)`: Executes in auto-commit mode and commits changes upon success - `rollback(conn)`: Executes with auto-commit off and rolls back at the end - `transaction(conn)`: Executes with auto-commit off, commits on success, and rolls back on exception For complex transactions, using the `transaction` method handles commits and rollbacks appropriately: ```scala 3 val complexProgram: DBIO[Int] = for _ <- sql"INSERT INTO accounts (owner, balance) VALUES ('Alice', 1000)".update _ <- sql"INSERT INTO accounts (owner, balance) VALUES ('Bob', 500)".update count <- sql"SELECT COUNT(*) FROM accounts".query[Int].unsafe yield count provider .use { conn => complexProgram.transaction(conn).map(println(_)) } .unsafeRunSync() ``` In this program, if either INSERT fails, both changes are rolled back. Only when both succeed are the changes committed. ## Next Steps Now that you've learned how to execute basic queries, let's take a closer look at [Parameterized Queries](/en/tutorial/Parameterized-Queries.md) next. # Updating Data Now that we've learned how to retrieve data in [Selecting Data](/en/tutorial/Selecting-Data.md), let's look at how to write data to the database. This page explains the basics of Data Manipulation Language (DML) such as INSERT, UPDATE, and DELETE. ## Basics of Data Updating Write operations to the database behave somewhat differently from read operations because they change the state of the database. ldbc provides appropriate abstractions to safely perform these operations. The basic flow for performing write operations is as follows: 1. Create an SQL query using the `sql` interpolator 2. Call the appropriate method (`.update`, `.returning`, etc.) to specify the type of query 3. Execute the query with `.commit()` or `.transaction()` 4. Process the results ## Inserting Data (INSERT) ### Basic INSERT Operation To insert data, use the SQL `INSERT` statement and call ldbc's `.update` method. Here's an example of inserting a row into the `user` table: ```scala // Method to insert a new user into the user table def insertUser(name: String, email: String): DBIO[Int] = sql"INSERT INTO user (name, email) VALUES ($name, $email)" .update ``` Let's use this method to actually insert data and check the result: ```scala // Execute the insert operation insertUser("dave", "dave@example.com").commit(conn).unsafeRunSync() // The return value is the number of affected rows (in this case, 1) // Check the inserted data sql"SELECT id, name, email FROM user WHERE name = 'dave'" .query[(Int, String, String)] .to[Option] .readOnly(conn) .unsafeRunSync() .foreach { case (id, name, email) => println(s"ID: $id, Name: $name, Email: $email") } ``` The `.update` method returns the number of affected rows (in this case, 1). ### Retrieving Auto-generated Keys In many cases, tables have auto-generated keys such as auto-increment IDs. 
To retrieve this auto-generated key value at the time of insertion, use the `.returning[T]` method: ```scala // Method to insert and get the generated ID def insertUserAndGetId(name: String, email: String): DBIO[Long] = sql"INSERT INTO user (name, email) VALUES ($name, $email)" .returning[Long] ``` Let's use this method to insert a new user and get the auto-generated ID: ```scala // Insert and get the auto-generated ID val newUserId = insertUserAndGetId("frank", "frank@example.com") .commit(conn) .unsafeRunSync() println(s"New user ID: $newUserId") ``` **Note**: The `.returning` method only works with columns that have `AUTO_INCREMENT` set in MySQL. ### Retrieving Inserted Data If you want to retrieve all the information about the inserted data at the same time, you can combine two steps using the auto-generated key: ```scala // Case class representing a user case class User(id: Long, name: String, email: String) // Method to insert a user and return the inserted user's information def insertAndRetrieveUser(name: String, email: String): DBIO[Option[User]] = for id <- sql"INSERT INTO user (name, email) VALUES ($name, $email)".returning[Long] user <- sql"SELECT id, name, email FROM user WHERE id = $id".query[User].to[Option] yield user ``` Usage example: ```scala // Insert a user and get the inserted user's information insertAndRetrieveUser("grace", "grace@example.com") .commit(conn) .unsafeRunSync() .foreach { user => println(s"Inserted user: ID=${user.id}, Name=${user.name}, Email=${user.email}") } ``` ## Updating Data (UPDATE) To update data, use the SQL `UPDATE` statement and call the `.update` method similarly: ```scala // Method to update a user's email address def updateUserEmail(id: Long, newEmail: String): DBIO[Int] = sql"UPDATE user SET email = $newEmail WHERE id = $id" .update ``` Usage example: ```scala // Update a user's email address updateUserEmail(1, "alice+updated@example.com") .commit(conn) .unsafeRunSync() // Check the updated data sql"SELECT id, name, email FROM user WHERE id = 1" .query[User] .to[Option] .readOnly(conn) .unsafeRunSync() .foreach { user => println(s"Updated user: ID=${user.id}, Name=${user.name}, Email=${user.email}") } ``` ### Updating with Multiple Conditions It's also possible to update with complex conditions: ```scala // Update email addresses for users matching a specific name def updateEmailsByName(name: String, newEmail: String): DBIO[Int] = sql""" UPDATE user SET email = $newEmail WHERE name LIKE ${"%" + name + "%"} """.update ``` This example updates the email address of all users who match the specified name pattern. 
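As a hedged usage sketch (the pattern and address below are made-up values for illustration), the method can be executed in auto-commit mode like the other update examples; the result is the number of rows that matched the pattern:

```scala
// Update every user whose name contains "ali" and report how many rows changed
val updatedRows = updateEmailsByName("ali", "ali+bulk@example.com")
  .commit(conn)
  .unsafeRunSync()

println(s"Number of rows updated: $updatedRows")
```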
## Deleting Data (DELETE) To delete data, use the SQL `DELETE` statement and call the `.update` method: ```scala // Delete a user by ID def deleteUser(id: Long): DBIO[Int] = sql"DELETE FROM user WHERE id = $id" .update ``` Usage example: ```scala // Delete a user deleteUser(5) .commit(conn) .unsafeRunSync() // Confirm deletion sql"SELECT COUNT(*) FROM user WHERE id = 5" .query[Int] .unsafe .readOnly(conn) .unsafeRunSync() match { case 0 => println("User with ID 5 has been deleted") case n => println(s"User with ID 5 still exists (count: $n)") } ``` ### Deleting Multiple Rows It's also possible to delete multiple rows at once that match a condition: ```scala // Delete all users with email addresses from a specific domain def deleteUsersByEmailDomain(domain: String): DBIO[Int] = sql"DELETE FROM user WHERE email LIKE ${"%@" + domain}" .update ``` ## Batch Processing (Bulk Operations) ### Bulk Insertion of Multiple Rows To efficiently insert many rows, you can specify multiple value sets in the `VALUES` clause: ```scala import cats.data.NonEmptyList // Bulk insert multiple users def insertManyUsers(users: NonEmptyList[(String, String)]): DBIO[Int] = val values = users.map { case (name, email) => sql"($name, $email)" } (sql"INSERT INTO user (name, email) VALUES " ++ Fragments.values(values)).update ``` Usage example: ```scala // Define multiple users val newUsers = NonEmptyList.of( ("greg", "greg@example.com"), ("henry", "henry@example.com"), ("irene", "irene@example.com") ) // Execute bulk insertion val insertedCount = insertManyUsers(newUsers).commit(conn).unsafeRunSync() println(s"Number of rows inserted: $insertedCount") // Should output "Number of rows inserted: 3" ``` ### Transactions for Multiple Operations To atomically execute multiple operations, use transactions. This ensures that either all operations succeed or all operations fail (rollback): ```scala // Example of inserting a user and related information for that user def createUserWithProfile(name: String, email: String, bio: String): DBIO[Long] = for userId <- sql"INSERT INTO user (name, email) VALUES ($name, $email)".returning[Long] profileId <- sql"INSERT INTO user_profile (user_id, bio) VALUES ($userId, $bio)".returning[Long] yield userId ``` By executing this method using `.transaction`, both user insertion and profile insertion are processed as a single transaction: ```scala // Execute within a transaction val userId = createUserWithProfile("julia", "julia@example.com", "Programmer") .transaction(conn) .unsafeRunSync() println(s"Created user ID: $userId") ``` If the insertion into the `user_profile` table fails, the insertion into the `user` table will also automatically roll back. 
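To make that rollback visible, here is a small hedged sketch that deliberately makes the second statement fail (it uses the `user` and `order` tables from the Setup page, and a `product_id` that is assumed not to exist, so the foreign key constraint rejects the insert); `.attempt` turns the failure into a value instead of a thrown exception:

```scala
// A two-step transaction whose second INSERT is designed to fail:
// product_id 9999 is assumed not to exist, so the foreign key check fails
// and the user inserted in the first step is rolled back as well.
val failingProgram: DBIO[Long] =
  for
    userId <- sql"INSERT INTO user (name, email) VALUES ('mallory', 'mallory@example.com')".returning[Long]
    _      <- sql"INSERT INTO `order` (user_id, product_id, quantity) VALUES ($userId, 9999, 1)".update
  yield userId

failingProgram
  .transaction(conn)
  .attempt // Either[Throwable, Long] instead of a thrown exception
  .unsafeRunSync() match {
    case Right(id)   => println(s"Created user ID: $id")
    case Left(error) => println(s"Transaction rolled back: ${error.getMessage}")
  }
```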
## Selecting Query Execution Methods

ldbc provides the following query execution methods for data update operations:

- `.commit(conn)` - Executes a write operation in auto-commit mode (suitable for simple, single update operations)
- `.rollback(conn)` - Executes a write operation and always rolls back (for testing and verification)
- `.transaction(conn)` - Executes operations within a transaction, committing only upon success (suitable when you want to treat multiple operations as a single unit)

```scala
// Execution in auto-commit mode (simple single operation)
updateUserEmail(1, "new@example.com").commit(conn)

// Execution for testing (changes are not saved)
insertUser("test", "test@example.com").rollback(conn)

// Multiple operations within a transaction (all succeed or all fail)
(for {
  userId <- insertUserAndGetId("kate", "kate@example.com")
  _      <- sql"INSERT INTO user_roles (user_id, role_id) VALUES ($userId, 2)".update
} yield userId).transaction(conn)
```

## Error Handling

It's also important to handle errors that may occur during data update operations. In ldbc, you can use the `IO` monad from `cats-effect` to process errors:

```scala
import cats.effect.unsafe.IORuntime
import cats.effect.IO

implicit val runtime: IORuntime = IORuntime.global

// Example of error handling
def safeUpdateUser(id: Long, newEmail: String): Unit = {
  val result = updateUserEmail(id, newEmail)
    .commit(conn)
    .attempt // Convert IO result to Either[Throwable, Int]
    .unsafeRunSync()

  result match {
    case Right(count) => println(s"Number of rows updated: $count")
    case Left(error)  => println(s"An error occurred: ${error.getMessage}")
  }
}

// Execution example (updating with a non-existent ID)
safeUpdateUser(9999, "nonexistent@example.com")
```

## Summary

When performing data update operations with ldbc, it's good to remember the following points:

1. Use `.update` or `.returning[T]` (to retrieve auto-generated keys) for **insert operations**
2. Use `.update` for **update operations**, specifying target rows with a WHERE clause
3. Also use `.update` for **delete operations** (though it's a DELETE statement, the operation method is common)
4. Use `.transaction()` to execute **multiple operations** atomically
5. Use `.commit()` for simple **single operations**
6. Use `.rollback()` for **testing purposes** to discard changes

By appropriately combining these data update operations, you can efficiently and safely manipulate your database.

## Next Steps

Now you understand how to insert, update, and delete data in your database. At this point, you've learned all the basics of using ldbc. You've acquired the knowledge necessary for everyday database operations, including database connection, query execution, data reading and writing, and transaction management.

From here, we'll move on to more advanced topics. Let's start with [Error Handling](/en/tutorial/Error-Handling.md) to learn how to properly handle exceptions that can occur in database operations.

# ldbc Tutorial

ldbc is a type-safe MySQL database client written in Scala 3. It allows you to write SQL safely and concisely while leveraging Scala's powerful type system to reduce database operation errors.
## Key Features of ldbc - **Type Safety**: Detects SQL query issues at compile time - **Concise Syntax**: Intuitive query writing using SQL string interpolation - **Cats Effect Integration**: Supports pure functional programming - **Schema Auto-generation**: Generate Scala code from SQL - **Query Builder**: Build type-safe queries without writing SQL directly ## Quick Start Let's first take a look at the basic usage of ldbc. Here's a simple example of retrieving user information from the database and adding a new user: ```scala 3 import cats.effect.* import cats.syntax.all.* import ldbc.connector.* import ldbc.dsl.* // Database connection configuration val provider = ConnectionProvider .default[IO]("127.0.0.1", 3306, "ldbc", "password", "ldbc") .setSSL(SSL.Trusted) // Executing queries val program = for // Retrieving user list users <- sql"SELECT id, name FROM user".query[(Int, String)].to[List] // Adding a new user _ <- sql"INSERT INTO user (name, email) VALUES ('Taro Yamada', 'yamada@example.com')".update // Checking user count after update count <- sql"SELECT COUNT(*) FROM user".query[Int].unsafe yield (users, count) // Executing the program provider.use { conn => program.transaction(conn) } ``` ## How to Proceed with this Tutorial This tutorial series is structured to help you learn ldbc step by step. We recommend proceeding in the following order: ### 1. Setup First, let's prepare the environment for using ldbc. - [Setup](/en/tutorial/Setup.md) - Preparing development environment and database - [Connection](/en/tutorial/Connection.md) - How to connect to a database ### 2. Basic Operations Next, let's learn frequently used features for everyday tasks. - [Simple Program](/en/tutorial/Simple-Program.md) - Execute your first simple queries - [Parameters](/en/tutorial/Parameterized-Queries.md) - Queries with parameters - [Selecting Data](/en/tutorial/Selecting-Data.md) - Retrieving data with SELECT statements - [Updating Data](/en/tutorial/Updating-Data.md) - INSERT/UPDATE/DELETE operations - [Database Operations](/en/tutorial/Database-Operations.md) - Transaction management ### 3. Advanced Operations After understanding the basics, let's move on to more advanced features. - [Error Handling](/en/tutorial/Error-Handling.md) - Exception handling - [Logging](/en/tutorial/Logging.md) - Query logging - [Custom Data Types](/en/tutorial/Custom-Data-Type.md) - Support for custom data types - [Query Builder](/en/tutorial/Query-Builder.md) - Type-safe query construction - [Schema](/en/tutorial/Schema.md) - Table definitions - [Schema Code Generation](/en/tutorial/Schema-Code-Generation.md) - Code generation from SQL ## Why Choose ldbc? - **Low Learning Curve**: String interpolation that leverages your existing SQL knowledge - **Safety**: Early bug detection through compile-time type checking - **Productivity**: Reduction of boilerplate code and automatic code generation - **Performance**: Optimized MySQL connection management - **Benefits of Functional Programming**: Composability through Cats Effect integration Now, let's start with [Setup](/en/tutorial/Setup.md)! ## Detailed Navigation @:navigationTree { entries = [ { target = "/en/tutorial", depth = 2 } ] }