# Migration Notes (from 0.2.x to 0.3.x)

## Package

### Change package name

| 0.2.x | 0.3.x |
|---|---|
| ldbc-core | ldbc-schema |
### New Packages

Three new packages have been added.

| Module / Platform | JVM | Scala Native | Scala.js |
|---|---|---|---|
| ldbc-connector | ✅ | ✅ | ✅ |
| jdbc-connector | ✅ | ❌ | ❌ |
| ldbc-statement | ✅ | ✅ | ✅ |
### All Packages

| Module / Platform | JVM | Scala Native | Scala.js |
|---|---|---|---|
| ldbc-sql | ✅ | ✅ | ✅ |
| ldbc-connector | ✅ | ✅ | ✅ |
| jdbc-connector | ✅ | ❌ | ❌ |
| ldbc-dsl | ✅ | ✅ | ✅ |
| ldbc-statement | ✅ | ✅ | ✅ |
| ldbc-query-builder | ✅ | ✅ | ✅ |
| ldbc-schema | ✅ | ✅ | ✅ |
| ldbc-schemaSpy | ✅ | ❌ | ❌ |
| ldbc-codegen | ✅ | ✅ | ✅ |
| ldbc-hikari | ✅ | ❌ | ❌ |
| ldbc-plugin | ✅ | ❌ | ❌ |
## Functional changes

### Connector switching

The Scala MySQL connector now supports switching between jdbc and ldbc connections.

This lets developers choose either the jdbc or the ldbc library for database connections, depending on their project requirements, and take advantage of each library's strengths, giving more flexibility in how connections are set up and operated.
#### How to change

First, set up the common dependency.

```scala
libraryDependencies += "io.github.takapi327" %% "ldbc-dsl" % "0.3.0-beta11"
```

For cross-platform projects (JVM, JS, Native):

```scala
libraryDependencies += "io.github.takapi327" %%% "ldbc-dsl" % "0.3.0-beta11"
```

Which connector dependency to use depends on whether the database connection goes through a connector based on the Java API or through the connector provided by ldbc.

**Using the JDBC connector**

```scala
libraryDependencies += "io.github.takapi327" %% "jdbc-connector" % "0.3.0-beta11"
```

**Using the ldbc connector**

```scala
libraryDependencies += "io.github.takapi327" %% "ldbc-connector" % "0.3.0-beta11"
```

For cross-platform projects (JVM, JS, Native):

```scala
libraryDependencies += "io.github.takapi327" %%% "ldbc-connector" % "0.3.0-beta11"
```
#### Usage

**Using the JDBC connector**

```scala
import jdbc.connector.*

val ds = new com.mysql.cj.jdbc.MysqlDataSource()
ds.setServerName("127.0.0.1")
ds.setPortNumber(13306)
ds.setDatabaseName("world")
ds.setUser("ldbc")
ds.setPassword("password")

val provider = ConnectionProvider.fromDataSource(ds, ExecutionContexts.synchronous)
```

**Using the ldbc connector**

```scala
import ldbc.connector.*

val provider =
  ConnectionProvider
    .default[IO]("127.0.0.1", 3306, "ldbc", "password", "ldbc")
    .setSSL(SSL.Trusted)
```
Database operations can then be performed using the connection obtained from either provider.

```scala
val result: IO[(List[Int], Option[Int], Int)] = provider.use { conn =>
  (for
    result1 <- sql"SELECT 1".query[Int].to[List]
    result2 <- sql"SELECT 2".query[Int].to[Option]
    result3 <- sql"SELECT 3".query[Int].unsafe
  yield (result1, result2, result3)).readOnly(conn)
}
```
## Breaking changes

### Extension of plain query construction

Previously, the plain query API specified both the type to retrieve and its container (list or option) in a single call.

This update separates the specification of the retrieved type from the specification of its container, which allows the internal logic to be shared. It also brings the plain query syntax closer to doobie, so doobie users should be able to use it without confusion.

**Before**

```scala
sql"SELECT id, name, age FROM user".toList[(Long, String, Int)].readOnly(connection)
sql"SELECT id, name, age FROM user WHERE id = ${1L}".headOption[User].readOnly(connection)
```

**After**

```scala
sql"SELECT id, name, age FROM user".query[(Long, String, Int)].to[List].readOnly(connection)
sql"SELECT id, name, age FROM user WHERE id = ${1L}".query[User].to[Option].readOnly(connection)
```
### AUTO INCREMENT value retrieval method renamed

The `updateReturningAutoGeneratedKey` API in the update API, which returns the value generated by an AUTO INCREMENT column, has been renamed to `returning`.

Returning the value generated for an AUTO INCREMENT column on insert is a MySQL feature; other RDBs behave differently and may return values other than the AUTO INCREMENT one. The API was renamed early so that the narrower name does not get in the way of future extensions.

**Before**

```scala
sql"INSERT INTO `table`(`id`, `c1`) VALUES ($None, ${ "column 1" })".updateReturningAutoGeneratedKey[Long]
```

**After**

```scala
sql"INSERT INTO `table`(`id`, `c1`) VALUES ($None, ${ "column 1" })".returning[Long]
```
### How to build a query builder

Previously, the query builder could not be used without first defining a table schema.

This update changes that so the query builder can be used in a simpler way.
**Before**

First, create a table schema corresponding to the model.

```scala
case class User(
  id:   Long,
  name: String,
  age:  Option[Int],
)

val userTable = Table[User]("user")(                   // CREATE TABLE `user` (
  column("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY),   //   `id` BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  column("name", VARCHAR(255)),                        //   `name` VARCHAR(255) NOT NULL,
  column("age", INT.UNSIGNED.DEFAULT(None)),           //   `age` INT unsigned DEFAULT NULL
)
```

Next, use the table schema to build the `TableQuery`.

```scala
val tableQuery = TableQuery[IO, User](userTable)
```
The last step was query construction.

```scala
val result: IO[List[User]] = connection.use { conn =>
  tableQuery.selectAll.toList[User].readOnly(conn)
  // "SELECT `id`, `name`, `age` FROM user"
}
```
**After**

With this change, the table definition is now derived directly from the model.

```scala
import ldbc.dsl.codec.Codec
import ldbc.query.builder.Table

case class User(
  id:   Long,
  name: String,
  age:  Option[Int],
) derives Table

object User:
  given Codec[User] = Codec.derived[User]
```

Next, build the `TableQuery`.

```scala
import ldbc.query.builder.TableQuery

val userTable = TableQuery[User]
```
Finally, construct the query and use it.

```scala
val result: IO[List[User]] = provider.use { conn =>
  userTable.selectAll.query.to[List].readOnly(conn)
  // "SELECT `id`, `name`, `age` FROM user"
}
```
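The derived `TableQuery` is not limited to `selectAll`. As a rough sketch, a filtered projection looks like the following; the `where`/`===` syntax is assumed from the query builder API:

```scala
// Sketch only: selecting a single column with a condition (assumed API).
val names: IO[List[String]] = provider.use { conn =>
  userTable
    .select(user => user.name)   // SELECT `name` FROM user
    .where(_.id === 1L)          // WHERE id = ?
    .query
    .to[List]
    .readOnly(conn)
}
```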
### Building a query builder using Schema

The previous method of building a Table that mimics a schema has been replaced by building a Table type with the Schema project. Below, we look at constructing a Table type corresponding to the following User model.

```scala
case class User(
  id:   Long,
  name: String,
  age:  Option[Int],
)
```
**Before**

Previously, an instance of Table had to be created directly. The columns had to be passed as arguments to Table in the same order as the properties of the User class, and the data type of each column had to be set as well.

The TableQuery built from this table type was implemented with Dynamic, which allowed type-safe access but could not be completed by development tools.

In addition, this construction method was somewhat slower to compile than class generation.

```scala
val userTable = Table[User]("user")(
  column("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY),
  column("name", VARCHAR(255)),
  column("age", INT.UNSIGNED.DEFAULT(None)),
)
```
**After**

With this change, the Table type is now created by defining a class that extends Table. In addition, a column's data type is no longer mandatory and can be set at the implementer's discretion.

This construction style is similar to Slick, which should make it more familiar to implementers.

```scala
class UserTable extends Table[User]("user"):
  def id:   Column[Long]        = column[Long]("id")
  def name: Column[String]      = column[String]("name")
  def age:  Column[Option[Int]] = column[Option[Int]]("age")

  override def * : Column[User] = (id *: name *: age).to[User]
```
The data type of a column can still be set. This setting is used, for example, when generating a schema using this table class.

```scala
class UserTable extends Table[User]("user"):
  def id:   Column[Long]        = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY)
  def name: Column[String]      = column[String]("name", VARCHAR(255))
  def age:  Column[Option[Int]] = column[Option[Int]]("age", INT.UNSIGNED.DEFAULT(None))

  override def * : Column[User] = (id *: name *: age).to[User]
```
There are also column definition methods named after the data type. Using them, the definition above can be rewritten as follows. With these methods the variable name is used as the column name, so there is no need to pass the column name as an argument.

```diff
 class UserTable extends Table[User]("user"):
-  def id:   Column[Long]        = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY)
-  def name: Column[String]      = column[String]("name", VARCHAR(255))
-  def age:  Column[Option[Int]] = column[Option[Int]]("age", INT.UNSIGNED.DEFAULT(None))
+  def id:   Column[Long]        = bigint().autoIncrement.primaryKey
+  def name: Column[String]      = varchar(255)
+  def age:  Column[Option[Int]] = int().unsigned.defaultNull

   override def * : Column[User] = (id *: name *: age).to[User]
```
Column names can be formatted by providing a Naming implicitly. The default is camelCase; to change it to PascalCase, do the following.

```scala
class UserTable extends Table[User]("user"):
  given Naming = Naming.PASCAL

  def id:   Column[Long]        = bigint().autoIncrement.primaryKey
  def name: Column[String]      = varchar(255)
  def age:  Column[Option[Int]] = int().unsigned.defaultNull

  override def * : Column[User] = (id *: name *: age).to[User]
```
If you want to change the format of a specific column only, you can pass the column name as an argument.

```scala
class UserTable extends Table[User]("user"):
  def id:   Column[Long]        = bigint("ID").autoIncrement.primaryKey
  def name: Column[String]      = varchar("NAME", 255)
  def age:  Column[Option[Int]] = int("AGE").unsigned.defaultNull

  override def * : Column[User] = (id *: name *: age).to[User]
```
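Such a table class can then be plugged into the query builder. The exact construction below (`TableQuery[UserTable]`) is an assumption based on the API shown earlier in this guide:

```scala
// Sketch only: building a TableQuery from the schema table class (assumed API).
val userTable = TableQuery[UserTable]

val users: IO[List[User]] = provider.use { conn =>
  userTable.selectAll.query.to[List].readOnly(conn)
}
```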
### Support for custom data types

Previously, user-defined data types were supported via `ResultSetReader` and `Parameter`.

With this update, the way custom data types are supported has changed from `ResultSetReader` and `Parameter` to the following.

#### Encoder

`Parameter` has been replaced by `Encoder` for dynamically embedding values in the query string.

This removes the need for users to write redundant code to receive the effect type, allowing custom data types to be implemented and used as parameters more simply.
```scala
enum Status(val code: Int, val name: String):
  case Active extends Status(1, "Active")
  case InActive extends Status(2, "InActive")
```

```diff
-given Parameter[Status] with
-  override def bind[F[_]](
-    statement: PreparedStatement[F],
-    index: Int,
-    status: Status
-  ): F[Unit] = statement.setInt(index, status.code)
+given Encoder[Status] = Encoder[Int].contramap(_.code)
```
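With the `Encoder` in scope, `Status` values can be bound directly as parameters. A minimal sketch, assuming a hypothetical `user` table with a `status` column and the `update`/`commit` methods referenced elsewhere in this guide:

```scala
// Sketch only: the custom Encoder lets Status be embedded as a parameter.
// The `user` table and `status` column are hypothetical.
val active: Status = Status.Active

val updatedRows: IO[Int] = provider.use { conn =>
  sql"UPDATE user SET status = $active WHERE id = ${ 1L }".update.commit(conn)
}
```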
The `Encoder` encoding process can only return Scala types that `PreparedStatement` can handle.

Currently, the following types are supported:
| Scala Type | Method called on PreparedStatement |
|---|---|
| Boolean | setBoolean |
| Byte | setByte |
| Short | setShort |
| Int | setInt |
| Long | setLong |
| Float | setFloat |
| Double | setDouble |
| BigDecimal | setBigDecimal |
| String | setString |
| Array[Byte] | setBytes |
| java.time.LocalDate | setDate |
| java.time.LocalTime | setTime |
| java.time.LocalDateTime | setTimestamp |
| None | setNull |
Encoder can also compose multiple types to create a new type.

```scala
val encoder: Encoder[(Int, String)] = Encoder[Int] *: Encoder[String]
```

Composite types can also be converted to arbitrary classes.

```scala
case class Status(code: Int, name: String)

given Encoder[Status] = (Encoder[Int] *: Encoder[String]).to[Status]
```
#### Decoder

The process of reading data from a `ResultSet` has changed from `ResultSetReader` to `Decoder`.

```diff
-given ResultSetReader[IO, Status] =
-  ResultSetReader.mapping[IO, Int, Status](code => Status.fromCode(code))
+given Decoder[Status] = Decoder[Int].map(code => Status.fromCode(code))
```
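With the `Decoder` in scope, `Status` can be read back from query results. A brief sketch, again with a hypothetical `user` table:

```scala
// Sketch only: the custom Decoder lets Status be read from a result set.
// The `user` table and `status` column are hypothetical.
val statuses: IO[List[Status]] = provider.use { conn =>
  sql"SELECT status FROM user".query[Status].to[List].readOnly(conn)
}
```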
Decoder can also compose multiple types to create a new type.

```scala
val decoder: Decoder[(Int, String)] = Decoder[Int] *: Decoder[String]
```

Composite types can also be converted to arbitrary classes.

```scala
case class Status(code: Int, name: String)

given Decoder[Status] = (Decoder[Int] *: Decoder[String]).to[Status]
```
#### Introducing Codec

`Codec` is a combination of `Encoder` and `Decoder`.

```scala
enum Status(val code: Int, val name: String):
  case Active extends Status(1, "Active")
  case InActive extends Status(2, "InActive")

given Codec[Status] = Codec[Int].imap(Status.fromCode)(_.code)
```
Codec can also compose multiple types to create a new type.

```scala
val codec: Codec[(Int, String)] = Codec[Int] *: Codec[String]
```

Composite types can also be converted to arbitrary classes.

```scala
case class Status(code: Int, name: String)

given Codec[Status] = (Codec[Int] *: Codec[String]).to[Status]
```
Because Codec is a combination of `Encoder` and `Decoder`, it can be converted to either one.

```scala
val encoder: Encoder[Status] = Codec[Status].asEncoder
val decoder: Decoder[Status] = Codec[Status].asDecoder
```
With this change, users can combine `Encoder` and `Decoder` through `Codec`.

This allows retrieved records to be converted into nested, hierarchical data.

```scala
case class City(id: Int, name: String, countryCode: String)
case class Country(code: String, name: String)
case class CityWithCountry(city: City, country: Country)

sql"SELECT city.Id, city.Name, city.CountryCode, country.Code, country.Name FROM city JOIN country ON city.CountryCode = country.Code".query[CityWithCountry]
```
Starting with Codec, `Encoder` and `Decoder` are resolved implicitly, so users do not need to specify these types explicitly.

However, if the model has many properties, implicit resolution may fail.

```
[error] |Implicit search problem too large.
[error] |an implicit search was terminated with failure after trying 100000 expressions.
[error] |The root candidate for the search was:
[error] |
[error] |  given instance given_Decoder_P in object Decoder for ldbc.dsl.codec.Decoder[City]}
```

In such cases, raising the implicit search limit in the compiler options may solve the problem.

```scala
scalacOptions += "-Ximplicit-search-limit:100000"
```
However, raising the search limit can increase compile time. In that case, manually constructing the required instances, as shown below, may be the solution.

```scala
given Decoder[City] = Decoder.derived[City]
// Or given Decoder[City] = (Decoder[Int] *: Decoder[String] *: Decoder[Int] *: ....).to[City]
given Encoder[City] = Encoder.derived[City]
// Or given Encoder[City] = (Encoder[Int] *: Encoder[String] *: Encoder[Int] *: ....).to[City]
```
Alternatively, you can use `Codec`, which combines `Encoder` and `Decoder`, to solve this.

```scala
given Codec[City] = Codec.derived[City]
// Or given Codec[City] = (Codec[Int] *: Codec[String] *: Codec[Int] *: ....).to[City]
```
### Change in the column refinement method

Until now, column refinement simply grouped the columns to be used into a tuple.

```scala
cityTable.select(city => (city.id, city.name))
```

However, this approach had problems. Column is a type with a single type parameter, and because Scala 2 limited tuples to 22 elements, boilerplate was needed to handle every tuple arity. The dynamically built tuple could also only be treated as a `Tuple` or `Tuple.Map`, so accessing a value as a `Column` type required casting with `asInstanceOf`, losing type safety and making the code more complex.
To solve this problem, we adopted twiddles, another Typelevel project.

With twiddles, columns can be composed more easily.

```scala
cityTable.select(city => city.id *: city.name)
```

It also eliminates unsafe type casts, since the internal code can simply use `Column[T]` instead of a tuple.
Twiddles also make it easier to convert the composed result to another type.

```scala
case class City(id: Long, name: String)

def id: Column[Long] = column[Long]("ID")
def name: Column[String] = column[String]("Name")

def city: Column[City] = (id *: name).to[City]
```
### Change from Table to TableQuery

Previously, the same Table type was used both to build queries from the model and to hold the table metadata.

```scala
case class City(id: Long, name: String) derives Table

val cityTable = Table[City]
```

However, because a single type represented two things, this implementation was easy to misuse.

```scala
cityTable.select(city => city.insert(???))
```
Development tools such as IDEs also offer completion for every available API, which caused no small amount of confusion for implementers.

To solve this problem, the Table and TableQuery types have been separated.

```scala
case class City(id: Long, name: String) derives Table

val cityTable = TableQuery[City]
```
The table name can be customized through Table's `derived`.

```scala
case class City(
  id:          Int,
  name:        String,
  countryCode: String,
  district:    String,
  population:  Int
)

object City:
  given Table[City] = Table.derived[City]("city")
```
### Change in how update statements are constructed

Previously, an update statement required one `set` call per column to be updated. That is convenient when updating a few columns individually, but writing a `set` for every additional column quickly becomes tedious.

```scala
cityTable
  .update("id", 1L)
  .set("name", "Tokyo")
  .set("population", 1, false)
```
With this update, columns can now be combined, so multiple columns can be specified together for an update.

```scala
cityTable
  .update(city => city.id *: city.name)((1L, "Tokyo"))
  .set(_.population, 1, false)
```

`set` can still be used to update only certain columns, and it can take a condition so that the additional column is updated only when the condition is true.
### Change in how table joins are constructed

Previously, table joins were constructed by passing the join condition as the second argument.

```scala
cityTable.join(countryTable)((c, co) => c.countryCode === co.code)
```

With this change, the join condition must be set with the `on` API. This is the result of an internal implementation change.

```scala
cityTable.join(countryTable).on((c, co) => c.countryCode === co.code)
```
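As a rough sketch of using the joined tables, columns from both sides can then be selected together; the projection syntax after `on` is assumed from the query builder API shown above:

```scala
// Sketch only: selecting columns across the join (assumed API).
val cityWithCountryName = cityTable
  .join(countryTable)
  .on((c, co) => c.countryCode === co.code)
  .select((c, co) => c.name *: co.name)   // SELECT city.`Name`, country.`Name` ...
```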