Quill Changelog
Starting from versions after v4.6.1, the release notes and changelog are provided in the GitHub Releases of the project. See https://github.com/zio/zio-quill/releases
4.6.1
- Remove support of Scala 2.11
- Update Scala versions
- Update ZIO to 2.0.12
- Update zio-logging to 2.1.12
- Update sbt to 1.7.3
- Update sbt-scoverage to 2.0.0
- Update zio-json to 0.5.0
Notes:
- Besides the Scala-version support update and the dependency bumps (thank you Guizmaii and Juliano!), the purpose of this release is to set the stage for future infrastructure changes, e.g. adding Scalafmt to do formatting instead of the long-standing Scalariform.
4.6.0
4.5.0
- Remove the need to make things embedded, happens automatically
- Date encoders for major date types. Extensible context.
- Warning about embedding fields that should have encoders
- jasync zio current schema configuration, jasync version update
Migration Notes:
- It is no longer necessary to extend `Embedded` for case classes that should be embedded within an entity. In the case that the embedded case class "looks" like it should be encoded/decoded (i.e. it has only one field), an additional warning has been introduced to notify the user of this potential issue.
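As a hedged sketch of the migration in plain Scala (the `Address`/`Person` classes below are illustrative, not from the changelog):

```scala
// Before Quill 4.5.0, an embedded case class had to extend Embedded:
//   case class Address(street: String, zip: Int) extends Embedded
// From 4.5.0 on, no marker trait is needed; embedding happens automatically:
case class Address(street: String, zip: Int)
case class Person(name: String, address: Address) // Address's columns are flattened into Person's row

val p = Person("Joe", Address("Main St", 12345))
```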
4.4.1
4.4.0
4.3.0
- Values clause batch insert
- Slightly better batch logging
- "transaction" supports ZIO effects with mixed environments
4.2.0
- Implement ZIO-Idiomatic JDBC Context
- Update idiomatic pattern based on discussions
- Implement ZIO idiomatic pattern for cassandra
- Add switch to manually disable returning/output-clauses
- cassandra - update if exists
- Change `infix"$content"` to `sql"$content"`
- Remove mysql as
Migration Notes:
- The `infix` interpolator is now deprecated because in Scala 3, `infix` is a keyword. Instead of `infix"MyUdf(${person.name})"` use `sql"MyUdf(${person.name})"`. For contexts such as Doobie that already have an `sql` interpolator, import `context.compat._` and use the `qsql` interpolator instead.
4.1.0
4.0.0
3.19.0
3.18.0
- Check all columns for null-ness for Option[Product] to be None
- Fixing Correlated Subquery Issues
- Corrects Like operator generating wrong SQLs
- Implement filterIfDefined
- Remove invalid 'AS' for Oracle Queries
- Remove twitter-chill library
Version Bumps:
- sbt-scalajs-crossproject to 1.2.0
- logback-classic to 1.2.11
- h2 to 2.1.212
- zio, zio-streams to 1.0.14
- cassandra-driver-core to 3.11.2
- java-driver-core to 4.14.1
- scala-collection-compat to 2.7.0
- mysql-connector-java to 8.0.29
- scala3-library, ... to 3.1.2
- sbt-sonatype to 3.9.13
- postgresql to 42.3.6
Migration Notes:
- As a result of 2504, the handling of optional-product rows (technically parts of rows) is now different. Whereas before, if any non-optional column of an optional-product row was null, the entire optional-product would be null, now an optional-product will only be null if every column inside it is null. For example, before, if a query returning `Person(name: Option[Name(first: String, last: String)], age: Int)` resulted in the row `ResultRow("Joe", null, 123)`, the entity would be decoded into `Person(None, 123)` (i.e. the optional-product `Option[Name]` would decode to `None`). Now however, `Option[Name]` only decodes to `None` if every column inside it is null. This means that `ResultRow("Joe", null, 123)` decodes to `Person(Name("Joe", 0 /*default-placeholder for null*/), 123)`. Only when both the `first` and `last` columns in `Name` are null, i.e. `ResultRow(null, null, 123)`, will the result be `Person(None, 123)`. Have a look at PR 2504 as well as its corresponding issue 2505 for more details on how this works and the rationale for it.
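The new rule can be sketched in plain Scala (a simplified model, not Quill's actual decoder; nullable columns are modeled as `Option` values and `""` stands in for the default null-placeholder):

```scala
case class Name(first: String, last: String)
case class Person(name: Option[Name], age: Int)

// New rule: the optional-product decodes to None only if EVERY column inside it is null
def decodeName(first: Option[String], last: Option[String]): Option[Name] =
  if (first.isEmpty && last.isEmpty) None
  else Some(Name(first.getOrElse(""), last.getOrElse(""))) // "" models the null placeholder

def decodePerson(first: Option[String], last: Option[String], age: Int): Person =
  Person(decodeName(first, last), age)
```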
3.16.5
3.16.4
- Support Spark 3.2.x and Scala 2.13 in quill-spark
- Fix dynamic query quat error
- Implement DistinctOn
- Sync Contexts with ProtoQuill and move out ProtoQuill conflicts
- Fix the typo `ZioCassandraSession` in the documentation.
3.16.3
Note
- This change is to allow ProtoQuill transition to BooPickle AST Serialization in https://github.com/zio/zio-protoquill/pull/72
3.16.2
3.16.1
3.16.0
Migration Notes
- This change removes the deprecated `EntityQuery.insert(CaseClass)` and `EntityQuery.update(CaseClass)` APIs, which have been replaced by `EntityQuery.insertValue(CaseClass)` and `EntityQuery.updateValue(CaseClass)`. This is the only change in this release so that you can update when ready. This change is needed due to the upstream Dotty issue: lampepfl/dotty#14043.
3.15.0
Migration Notes
- Similar to `EntityQuery.insert(CaseClass)`, the method `EntityQuery.update(CaseClass)`, e.g. `query[Person].update(Person("Joe", 123))`, has been replaced with `updateValue`. The original `update` method has been deprecated and will be removed in an upcoming Quill release.
3.14.1
3.14.0
3.13.0
- JAsync ZIO implementation
- cassandra-alpakka
- Need to change EntityQuery.insert(CaseClass) to EntityQuery.insertValue(CaseClass) for upstream Scala 3 issues.
- Update ScalaJS to latest
- Work on removing tuple elaboration
- Option to Disable Nested Subexpansion
- Remove deprecated async modules
- Add Scala 3 cross-build for quill-engine
- Move quill-core-portable & quill-sql-portable to common quill-engine module
- Sheath leaf map clauses that cannot be reduced so still have their column in queries
Migration Notes
- The method `EntityQuery.insert(CaseClass)`, e.g. `query[Person].insert(Person("Joe", 123))`, has been replaced with `insertValue`. The original `insert` method has been deprecated and will be removed in the next Quill release.
- The `quill-async` modules using Mauricio's deprecated library (here) have been removed. Please move to the `quill-jasync` libraries as soon as possible.
- Quill for ScalaJS has been updated to ScalaJS 1.8.
- `quill-core-portable` and `quill-sql-portable` are now merged into a cross-built `quill-engine` module.
- In 3.12.0, the addition of field-aliases was introduced in sub-queries, but #2340 then occurred. A compile-time switch `-Dquill.query.subexpand=false` has been introduced to disable the feature until it can be fixed.
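The `insert` to `insertValue` rename itself is mechanical; a plain-Scala sketch with a stub `EntityQuery` (not Quill's real, much richer class) shows the before/after call shape:

```scala
case class Person(name: String, age: Int)

// Stub modeling only the API rename; the String result is a stand-in for a real query
class EntityQuery[T] {
  @deprecated("use insertValue", "3.13.0")
  def insert(value: T): String = insertValue(value)
  def insertValue(value: T): String = s"INSERT => $value"
}

val people = new EntityQuery[Person]
// Before: people.insert(Person("Joe", 123))  -- now deprecated
val sql = people.insertValue(Person("Joe", 123))
```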
3.12.0
- cassandra - Datastax4x upgrade
- Implement dynamic query caching
- Fix Quat-based query schema rename issue
- forUpdate via infix
- Disable file infra log by default
- Fix `QIO.apply` function type
- doc: update CODEGEN.md
Migration Notes - Datastax Drivers:
The Datastax drivers have been moved to Version 4. This adds support for many new features, with the caveat that the configuration file format must be changed. In Version 4, the standard Datastax configuration file format and properties are in the HOCON format, and they are used to configure the driver.
Sample HOCON:
```
MyCassandraDb {
  preparedStatementCacheSize=1000
  keyspace=quill_test
  session {
    basic.contact-points = [ ${?CASSANDRA_CONTACT_POINT_0}, ${?CASSANDRA_CONTACT_POINT_1} ]
    basic.load-balancing-policy.local-datacenter = ${?CASSANDRA_DC}
    basic.request.consistency = LOCAL_QUORUM
    basic.request.page-size = 3
  }
}
```
The `session` entry values and keys are described in the Datastax documentation: Reference configuration.
The ZioCassandraSession constructors:
```scala
val zioSessionLayer: ZLayer[Any, Throwable, CassandraZioSession] =
  CassandraZioSession.fromPrefix("MyCassandraDb")
run(query[Person])
  .provideCustomLayer(zioSessionLayer)
```
Additional parameters can be added programmatically:
```scala
val zioSessionLayer: ZLayer[Any, Throwable, CassandraZioSession] =
  CassandraZioSession.fromContextConfig(LoadConfig("MyCassandraDb").withValue("keyspace", ConfigValueFactory.fromAnyRef("data")))
run(query[Person])
  .provideCustomLayer(zioSessionLayer)
```
The `session.queryOptions.fetchSize=N` config entry should be replaced by `basic.request.page-size=N`:
```
testStreamDB {
  preparedStatementCacheSize=1000
  keyspace=quill_test
  session {
    ...
    basic.request.page-size = 3
  }
  ...
}
```
Migration Notes - Query Log File:
Production of the query-log file `queries.txt` has been disabled by default due to issues with SBT and Metals. In order to use it, launch the compiler JVM (e.g. SBT) with the argument `-Dquill.log.file=my_queries.sql` or set the `quill_log_file` environment variable (e.g. `export quill_log_file=my_queries.sql`).
Migration Notes - Monix:
The Monix context wrapper `MonixJdbcContext.Runner` has been renamed to `MonixJdbcContext.EffectWrapper`. The type `Runner` needs to be used by ProtoQuill to define quill-context-specific execution contexts.
3.11.0
- Implement `transaction` on outer zio-jdbc-context using fiber refs
- Feature Request: write compile-time queries to a file
- `transaction` supports ZIO effects with mixed environments
- Apple M1 Build Updates & Instructions
Migration Notes:
All ZIO JDBC context `run` methods have now switched their dependency (i.e. `R`) from `Connection` to `DataSource`. This should clear up many innocent errors that have happened because it was unclear how this `Connection` was supposed to be provided. As I have come to understand, nearly all DAO service patterns involve grabbing a connection from a pooled DataSource, doing one single CRUD operation, and then returning the connection back to the pool. The new JDBC ZIO contexts memorialize this pattern.
The signature of `QIO[T]` has been changed from `ZIO[Connection, SQLException, T]` to `ZIO[DataSource, SQLException, T]`. A new type-alias `QCIO[T]` (lit. Quill Connection IO) has been introduced that represents `ZIO[Connection, SQLException, T]`.

If you are using the `.onDataSource` command, migration should be fairly easy. Whereas previously, a usage of quill-jdbc-zio 3.10.0 might have looked like this:

```scala
object MyPostgresContext extends PostgresZioJdbcContext(Literal); import MyPostgresContext._
val zioDS = DataSourceLayer.fromPrefix("testPostgresDB")

val people = quote {
  query[Person].filter(p => p.name == "Alex")
}

MyPostgresContext.run(people).onDataSource
  .tap(result => printLine(result.toString))
  .provideCustomLayer(zioDS)
```

In 3.11.0, simply remove the `.onDataSource` in order to use the new context:

```scala
object MyPostgresContext extends PostgresZioJdbcContext(Literal); import MyPostgresContext._
val zioDS = DataSourceLayer.fromPrefix("testPostgresDB")

val people = quote {
  query[Person].filter(p => p.name == "Alex")
}

MyPostgresContext.run(people) // Don't need `.onDataSource` anymore
  .tap(result => printLine(result.toString))
  .provideCustomLayer(zioDS)
```

If you are creating a Hikari DataSource directly, passing the dependency is now also simpler. Instead of having to pass the Hikari-pool-layer into `DataSourceLayer`, just provide the Hikari-pool-layer directly. From this:

```scala
def hikariConfig = new HikariConfig(JdbcContextConfig(LoadConfig("testPostgresDB")).configProperties)
def hikariDataSource: DataSource with Closeable = new HikariDataSource(hikariConfig)

val zioConn: ZLayer[Any, Throwable, Connection] =
  Task(hikariDataSource).toLayer >>> DataSourceLayer.live

MyPostgresContext.run(people)
  .tap(result => printLine(result.toString))
  .provideCustomLayer(zioConn)
```
To this:
```scala
def hikariConfig = new HikariConfig(JdbcContextConfig(LoadConfig("testPostgresDB")).configProperties)
def hikariDataSource: DataSource with Closeable = new HikariDataSource(hikariConfig)

val zioDS: ZLayer[Any, Throwable, DataSource] =
  Task(hikariDataSource).toLayer // Don't need `>>> DataSourceLayer.live` anymore!

MyPostgresContext.run(people)
  .tap(result => printLine(result.toString))
  .provideCustomLayer(zioDS)
```
If you want to provide a `java.sql.Connection` to a ZIO context directly, you can still do it using the `underlying` variable:

```scala
object Ctx extends PostgresZioJdbcContext(Literal); import Ctx._
Ctx.underlying.run(qr1)
  .provide(zio.Has(conn: java.sql.Connection))
```

Also, when using an underlying context, you can still use `onDataSource` to go from a `Connection` dependency back to a `DataSource` dependency (note that it no longer has to be `with Closeable`):

```scala
object Ctx extends PostgresZioJdbcContext(Literal); import Ctx._
Ctx.underlying.run(qr1)
  .onDataSource
  .provide(zio.Has(ds: java.sql.DataSource))
```

Finally, note that the `prepare` methods have been unaffected by this change. They still require a `Connection` and have the signature `ZIO[Connection, SQLException, PreparedStatement]`. This is because, in order to work with the result of this value (i.e. to work with `PreparedStatement`), the connection that created it must still be open.
3.10.0
- Defunct AsyncZioCache accidentally returned in #2174. Remove it.
- Open connection on ZIO blocking pool
- Line-up core API with ProtoQuill so child contexts can have same code
Migration Notes:
No externally facing API changes have been made.
This release aligns Quill's internal Context methods with the API defined in ProtoQuill and introduces a root-level context (in the `quill-sql-portable` module) that will be shared together with ProtoQuill.

Two arguments, `info: ExecutionInfo` and `dc: DatasourceContext`, have been introduced to all `execute___` and `prepare___` methods. For Scala2-Quill, these arguments should be ignored as they contain no relevant information. ProtoQuill uses them in order to pass AST information as well as whether the query is static or dynamic into execute and prepare methods. In the future, Scala2-Quill may be enhanced to use them as well.
3.9.0
- Pass Session to all Encoders/Decoders allowing UDT Encoding without local session variable in contexts e.g. ZIO and others
- Fixing on-conflict case with querySchema/schemaMeta renamed columns
Migration Notes:
This release modifies Quill's core encoding DSL however this is very much an internal API. If you are using MappedEncoder, which should be the case for most users, you will be completely unaffected. The MappedEncoder signatures remain the same.
Quill's core encoding API has changed:
```scala
// From:
type BaseEncoder[T] = (Index, T, PrepareRow) => PrepareRow
type BaseDecoder[T] = (Index, ResultRow) => T

// To:
type BaseEncoder[T] = (Index, T, PrepareRow, Session) => PrepareRow
type BaseDecoder[T] = (Index, ResultRow, Session) => T
```
That means that the internal signature of all encoders has also changed. For example, the `JdbcEncoder` has changed:
```scala
// From:
case class JdbcEncoder[T](sqlType: Int, encoder: BaseEncoder[T]) extends BaseEncoder[T] {
  override def apply(index: Index, value: T, row: PrepareRow) =
    encoder(index + 1, value, row)
}

// To:
case class JdbcEncoder[T](sqlType: Int, encoder: BaseEncoder[T]) extends BaseEncoder[T] {
  override def apply(index: Index, value: T, row: PrepareRow, session: Session) =
    encoder(index + 1, value, row, session)
}
```
If you are writing encoders that directly implement `BaseEncoder`, they will have to be modified with an additional `session: Session` parameter.

The actual type of `Session` will vary. For JDBC this will be `Connection`, for Cassandra this will be some implementation of `CassandraSession`, and for other systems that use an entirely different session paradigm this will just be `Unit`.
Again, if you are using MappedEncoders for all of your custom encoding needs, you will not be affected by this change.
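The shape of the change can be sketched with plain type aliases (the `PrepareRow`, `ResultRow`, and `Session` definitions below are stand-ins for illustration, not Quill's actual types):

```scala
type Index      = Int
type PrepareRow = List[Any]
type ResultRow  = List[Any]
type Session    = Unit // Connection for JDBC, a CassandraSession implementation for Cassandra

// Post-3.9.0 shape: encoders/decoders receive the Session as an extra argument
type BaseEncoder[T] = (Index, T, PrepareRow, Session) => PrepareRow
type BaseDecoder[T] = (Index, ResultRow, Session) => T

val stringEncoder: BaseEncoder[String] = (i, v, row, _) => row.updated(i, v)
val stringDecoder: BaseDecoder[String] = (i, row, _) => row(i).asInstanceOf[String]
```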
3.8.0
- Use ZIO-Native Iterator chunking for JDBC result sets
- Remove 'with Blocking' from all signatures
- Update Microsoft SQL Server Docker image
Migration Notes:
The `quill-jdbc-zio` contexts' `.run` method was designed to work with ZIO in an idiomatic way. As such, the environment variable of their return type included the `zio.blocking.Blocking` dependency. This added a significant amount of complexity. Instead of `ZIO[Connection, SQLException, T]`, the return type became `ZIO[Connection with Blocking, SQLException, T]`. Instead of `ZIO[DataSource with Closeable, SQLException, T]`, the return type became `ZIO[DataSource with Closeable with Blocking, SQLException, T]`.
Various types such as `QConnection` and `QDataSource` were created in order to encapsulate these concepts, but this only led to additional confusion. Furthermore, actually supplying a `Connection` or `DataSource with Closeable` required first peeling off the `with Blocking` clause, calling a `.provide`, and then appending it back on. The fact that a Connection needs to be opened from a DataSource (which will typically be a Hikari connection pool) further complicates the problem because this aforementioned process needs to be done twice. All of this leads to the clear conclusion that the `with Blocking` construct has bad ergonomics. For this reason, the ZIO team has decided to drop the concept of `with Blocking` in ZIO 2 altogether.

As a result of this, I have decided to drop the `with Blocking` construct in advance. Quill queries resulting from the `run(qry)` command still run on the blocking pool, but `with Blocking` is not included in the signature. This also means that the need for `QConnection` and `QDataSource` disappears, since they are now just `Connection` and `DataSource with Closeable` respectively. It also means that all the constructors on the corresponding objects, e.g. `QDataSource.fromPrefix("myDB")`, are no longer consistent with any actual construct in QIO, therefore they are not needed either.
Instead, I have introduced a simple layer-constructor called `DataSourceLayer` which has a `.live` implementation that converts `ZIO[Connection, SQLException, T]` to `ZIO[DataSource with Closeable, SQLException, T]` by taking a connection from the data-source and returning it immediately afterward; this is the analogue of what `QDataSource.toConnection` used to do. You can use it like this:

```scala
def hikariDataSource: DataSource with Closeable = ...

val zioConn: ZLayer[Any, Throwable, Connection] =
  Task(hikariDataSource).toLayer >>> DataSourceLayer.live

run(people)
  .provideCustomLayer(zioConn)
```
You can also use the extension method `.onDataSource` (or `.onDS` for short) to do the same thing:

```scala
def hikariDataSource: DataSource with Closeable = ...

run(people)
  .onDataSource
  .provide(Has(hikariDataSource))
```
Also, the constructor-methods `fromPrefix`, `fromConfig`, `fromJdbcConfig` and `fromDataSource` are available on `DataSourceLayer` to construct instances of `ZLayer[DataSource with Closeable, SQLException, Connection]`. Combined with the `toDataSource` construct, these provide a simple way to construct various Hikari pools from a corresponding typesafe-config file `application.conf`:

```scala
run(people)
  .onDataSource
  .provideLayer(DataSourceLayer.fromPrefix("testPostgresDB"))
```
Also note that the objects `QDataSource` and `QConnection` have not yet been removed. Instead, all of their methods have been marked as deprecated, and each carries a comment describing which `DataSourceLayer`/`onDataSource` call to use instead.
Cassandra:
Similar changes have been made in quill-cassandra-zio. `CassandraZioSession with Blocking` has been replaced with just `CassandraZioSession`, so now this is much easier to provide:

```scala
val session: CassandraZioSession = _

run(people)
  .provide(Has(session))
```

The CassandraZioSession constructors however are all still fine to use:

```scala
val zioSessionLayer: ZLayer[Any, Throwable, CassandraZioSession] =
  CassandraZioSession.fromPrefix("testStreamDB")
run(query[Person])
  .provideCustomLayer(zioSessionLayer)
```
3.7.2
3.7.1
3.7.0
Migration Notes: In order to properly accommodate a good ZIO experience, several refactorings had to be done to various internal context classes; none of these changes modify class structure in a breaking way.
The following was done for quill-jdbc-zio
- Query preparation base type definitions have been moved out of `JdbcContextSimplified` into `JdbcContextBase`, which inherits a class named `StagedPrepare` that defines prepare-types (e.g. `type PrepareQueryResult = Session => Result[PrepareRow]`).
- This has been done so that the ZIO JDBC Context can define prepare-types via the ZIO `R` parameter instead of a lambda parameter (e.g. `ZIO[QConnection, SQLException, PrepareRow]` a.k.a. `QIO[PrepareRow]`).
- In order to prevent user-facing breaking changes, the contexts in `BaseContexts.scala` now extend from both `JdbcContextSimplified` (indirectly) and `JdbcContextBase`, thus preserving the `Session => Result[PrepareRow]` prepare-types.
- The context `JdbcContextSimplified` now contains the `prepareQuery/Action/BatchAction` methods used by all contexts other than the ZIO contexts, which define these methods independently (since they use the ZIO `R` parameter).
- All remaining context functionality (i.e. the `run(...)` series of functions) has been extracted out into `JdbcRunContext`, which the ZIO JDBC Contexts in `ZioJdbcContexts.scala` as well as all the other JDBC Contexts now extend.
Similarly for quill-cassandra-zio:
- The `CassandraSessionContext`, on which the `CassandraMonixContext` and all the other Cassandra contexts are based, keeps internal state (i.e. session, keyspace, caches).
- This state was pulled out as separate classes, e.g. `SyncCache` and `AsyncFutureCache` (the ZIO equivalent of which is `AsyncZioCache`).
- Then a `CassandraZioSession` is created which extends these state-containers; however, it is not directly a base-class of the `CassandraZioContext`.
- Instead it is returned as a dependency from the CassandraZioContext run/prepare commands as part of the type `ZIO[CassandraZioSession with Blocking, Throwable, T]` (a.k.a. `CIO[T]`). This allows the primary context CassandraZioContext to be stateless.
3.6.1
Migration Notes:
- Memoization of Quats should improve performance of dynamic queries based on some profiling analysis. This change should not have any user-facing changes.
3.6.0
This description is an aggregation of the 3.6.0-RC1, RC2 and RC3 releases as well as several new items.
- Quat Enhancements to Support Needed Spark Use Cases
- Add support for scala 2.13 to quill-cassandra-lagom
- Change all Quat fields to Lazy
- Smart serialization based on number of Quat fields
- Better Dynamic Query DSL For Quats on JVM
- Fix incorrect Quat.Value parsing issues
- Fix Query in Nested Operation and Infix
- Fix Logic table, replicate Option.getOrElse optimization to Boolean Quats
- Fixes + Enhancements to Boolean Optional APIs
- Fix for Boolean Quat Issues
Migration Notes:
The Cassandra base UDT class `io.getquill.context.cassandra.Udt` has been moved to `io.getquill.Udt`.

When working with databases which do not support boolean literals (SQL Server, Oracle, etc...), infixes representing booleans will be converted to equality-expressions. For example:

```scala
query[Person].filter(p => sql"isJoe(p.name)".as[Boolean])
// SELECT ... FROM Person p WHERE isJoe(p.name)
// Becomes> SELECT ... FROM Person p WHERE 1 = isJoe(p.name)
```

This is because the aforementioned databases do not directly support boolean literals (i.e. true/false) or expressions that yield them. In some cases however, it is desirable for the above behavior not to happen and for the whole infix statement to be treated as an expression. For example:

```scala
query[Person].filter(p => sql"${p.age} > 21".as[Boolean])
// We Need This> SELECT ... FROM Person p WHERE p.age > 21
// Not This> SELECT ... FROM Person p WHERE 1 = p.age > 21
```

In order to have this behavior, instead of `sql"...".as[Boolean]`, use `sql"...".asCondition`:

```scala
query[Person].filter(p => sql"${p.age} > 21".asCondition)
// We Need This> SELECT ... FROM Person p WHERE p.age > 21
```

If the condition represents a pure function, be sure to use `sql"...".pure.asCondition`.
3.6.0-RC3
- Add support for scala 2.13 to quill-cassandra-lagom
- Change all Quat fields to Lazy
- Smart serialization based on number of Quat fields
- Better Dynamic Query DSL For Quats on JVM
3.6.0-RC2
Migration Notes:
When working with databases which do not support boolean literals (SQL Server, Oracle, etc...), infixes representing booleans will be converted to equality-expressions. For example:

```scala
query[Person].filter(p => sql"isJoe(p.name)".as[Boolean])
// SELECT ... FROM Person p WHERE isJoe(p.name)
// Becomes> SELECT ... FROM Person p WHERE 1 = isJoe(p.name)
```

This is because the aforementioned databases do not directly support boolean literals (i.e. true/false) or expressions that yield them. In some cases however, it is desirable for the above behavior not to happen and for the whole infix statement to be treated as an expression. For example:

```scala
query[Person].filter(p => sql"${p.age} > 21".as[Boolean])
// We Need This> SELECT ... FROM Person p WHERE p.age > 21
// Not This> SELECT ... FROM Person p WHERE 1 = p.age > 21
```

In order to have this behavior, instead of `sql"...".as[Boolean]`, use `sql"...".asCondition`:

```scala
query[Person].filter(p => sql"${p.age} > 21".asCondition)
// We Need This> SELECT ... FROM Person p WHERE p.age > 21
```

If the condition represents a pure function, be sure to use `sql"...".pure.asCondition`.

This release is not binary compatible with any Quill version before 3.5.3.
Any code generated by the Quill Code Generator with `quote { ... }` blocks will have to be regenerated with this Quill version if generated before 3.5.3.

In most SQL dialects (i.e. everything except Postgres), boolean literals and expressions yielding them are not supported, so statements such as `SELECT foo=bar FROM ...` are not supported. In order to get equivalent logic, it is necessary to use case-statements, e.g. `SELECT CASE WHEN foo=bar THEN 1 ELSE 0 END`.

On the other hand, in a WHERE-clause it is the opposite: `SELECT ... WHERE CASE WHEN (...) THEN foo ELSE bar END` is invalid and needs to be rewritten. Naively, a `1=` could be inserted: `SELECT ... WHERE 1 = (CASE WHEN (...) THEN foo ELSE bar END)`.

Note that this behavior can be disabled via the `-Dquill.query.smartBooleans` switch when issued during compile-time for compile-time queries and during runtime for runtime queries.

Additionally, in certain situations it is far more preferable to express this without the `CASE WHEN` construct: `SELECT ... WHERE ((...) && foo) || !(...) && foo`. This is because CASE statements in SQL are not sargable and generally cannot be well optimized.

A large portion of the Quill DSL has been moved outside of QueryDsl into the top level under the `io.getquill` package. Due to this change, it may be necessary to import `io.getquill.Query` if you are not already importing `io.getquill._`.
3.6.0-RC1
- Fix Query in Nested Operation and Infix
- Fix Logic table, replicate Option.getOrElse optimization to Boolean Quats
- Fixes + Enhancements to Boolean Optional APIs
- Fix for Boolean Quat Issues
Migration Notes:
This release is not binary compatible with any Quill version before 3.5.3. Any code generated by the Quill Code Generator with `quote { ... }` blocks will have to be regenerated with this Quill version if generated before 3.5.3.

In most SQL dialects (i.e. everything except Postgres), boolean literals and expressions yielding them are not supported, so statements such as `SELECT foo=bar FROM ...` are not supported. In order to get equivalent logic, it is necessary to use case-statements, e.g. `SELECT CASE WHEN foo=bar THEN 1 ELSE 0 END`.

On the other hand, in a WHERE-clause it is the opposite: `SELECT ... WHERE CASE WHEN (...) THEN foo ELSE bar END` is invalid and needs to be rewritten. Naively, a `1=` could be inserted: `SELECT ... WHERE 1 = (CASE WHEN (...) THEN foo ELSE bar END)`.

Note that this behavior can be disabled via the `-Dquill.query.smartBooleans` switch when issued during compile-time for compile-time queries and during runtime for runtime queries.

Additionally, in certain situations it is far more preferable to express this without the `CASE WHEN` construct: `SELECT ... WHERE ((...) && foo) || !(...) && foo`. This is because CASE statements in SQL are not sargable and generally cannot be well optimized.

A large portion of the Quill DSL has been moved outside of QueryDsl into the top level under the `io.getquill` package. Due to this change, it may be necessary to import `io.getquill.Query` if you are not already importing `io.getquill._`.
3.5.3
Please skip this release and proceed directly to the 3.6.0-RC line. This release was originally a test-bed for the new Quats-based functionality which was supposed to be a strictly internal mechanism. Unfortunately multiple issues were found. They will be addressed in the 3.6.X line.
- Adding Quill-Application-Types (Quats) to AST
- Translate boolean literals
- breakdown caseclasses in groupBy clause
- allowed distinct to be placed on an infix
- Change Subquery Expansion to be Quat-based
- Use quats to expand nested queries in Spark
- Fixed bug where alias of filter clause did not match alias of inner query.
- Add default implementations so Query can be more easily inherited from Dotty
- Monix streaming with NDBC
- Fix SqlServer snake case - OUTPUT i_n_s_e_r_t_e_d.id
Migration Notes:
- Quill 3.5.3 is source-compatible but not binary-compatible with Quill 3.5.2.
- Any code generated by the Quill Code Generator with `quote { ... }` blocks will have to be regenerated with Quill 3.5.3 as the AST has substantially changed.
- The implementation of Quill Application Types (Quats) has changed the internals of nested query expansion. Queries with a `querySchema` or a `schemaMeta` will be aliased between nested clauses slightly differently. Given:
```scala
case class Person(firstName: String, lastName: String)
val ctx = new SqlMirrorContext(PostgresDialect, Literal)
```
Before:
```sql
SELECT x.first_name, x.last_name FROM (
  SELECT x.first_name, x.last_name FROM person x) AS x
```
After:
```sql
SELECT x.firstName, x.lastName FROM (
  SELECT x.first_name AS firstName, x.last_name AS lastName FROM person x) AS x
```
Note however that the semantic result of the queries should be the same. No user-level code change for this should be required.
3.5.2
- Add support jasync-sql for postgres
- Add quill-jasync-mysql
- Delete returning
- Fix SqlServer snake case - OUTPUT i_n_s_e_r_t_e_d.id
- Add translate to NDBC Context
- Apply NamingStrategy after applying prefix
- Remove use of `Row#getAnyOption` from `FinaglePostgresDecoders`
- Better error message about lifting for enum types
- More 2.13 modules
Migration Notes:
- Much of the content in `QueryDsl` has been moved to the top-level for better portability with the upcoming Dotty implementation. This means that things like `Query` are no longer part of `Context` but now are directly in the `io.getquill` package. If you are importing `io.getquill._`, your code should be unaffected.
- Custom decoders written for Finagle Postgres no longer require a `ClassTag`.
3.5.1
3.5.0
- Ndbc Postgres Support
- MS SQL Server returning via OUTPUT
- Pretty Print SQL Queries
- Fix shadowing via aggressive uncapture
- Fix Issues with Short
- Pull Oracle jdbc driver from Maven Central
3.4.10
3.4.9
3.4.8
- Additional Fixes for Embedded Entities in Nested Queries
- Fix java.sql.SQLException corner case
- Feature/local time support
- Update monix-eval, monix-reactive to 3.0.0
Documentation Updates:
Migration Notes:
- Monix 3.0.0 is not binary compatible with 3.0.0-RC3 which was a dependency of Quill 3.4.7. If you are using the Quill Monix modules, please update your dependencies accordingly.
3.4.7
3.4.6
3.4.5
3.4.4
3.4.3
3.4.2
Migration Notes:
- `NamingStrategy` is no longer applied on column and table names defined in `querySchema`; all column and table names defined in `querySchema` are now final. If you are relying on this behavior to name your columns/tables correctly, you will need to update your `querySchema` objects.
3.4.1
Migration Notes:
- Nested sub-queries will now have their terms re-ordered in certain circumstances although the functionality of the entire query should not change. If you have deeply nested queries with Infixes, double check that they are in the correct position.
3.4.0
Migration Notes:
- Infixes are now not treated as pure functions by default. This means that wherever they are used, nested queries may be created. You can use `.pure` (e.g. `sql"MY_PURE_UDF".pure.as[T]`) to revert to the previous behavior. See the Infix section of the documentation for more detail.
3.3.0
- Returning Record
- Change == and != to be Scala-idiomatic
- Optimize === comparisons when ANSI behavior assumed
- API to get PreparedStatement from Query for Low Level Use-cases
- Add BoundStatement support for all context.
- Only decode when field is non-null
- Fix support of nested transactions in Finagle-Postgres
- Returning shadow fix
- Fix SQL Server Subqueries with Order By
- Explicitly pass AsyncContext type params
- Remove unneeded Tuple reduction clause
- Fix join subquery+map+distinct and sortBy+distinct
- Fix Java9 depreciation message
Noteworthy Version Bumps:
- monix - 3.0.0-RC3
- cassandra-driver-core - 3.7.2
- orientdb-graphdb - 3.0.21
- postgresql - 42.2.6
- sqlite-jdbc - 3.28.0
Migration Notes:
- The `returning` method no longer excludes the specified ID column from the insertion as it used to. Use the `returningGenerated` method in order to achieve that. See the 'Database-generated values' section of the documentation for more detail.
- The `==` method now works Scala-idiomatically. That means that when two `Option[T]`-wrapped columns are compared, `None == None` will now yield `true`. The `===` operator can be used in order to compare `Option[T]`-wrapped columns in an ANSI-SQL idiomatic way, i.e. `None === None` yields `false`. See the 'equals' section of the documentation for more detail.
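A minimal sketch of the two equality semantics, using the mirror context and a hypothetical `Person` schema for illustration (any SQL context behaves the same way):

```scala
import io.getquill._

// Mirror context for illustration only; it translates queries without a database
val ctx = new SqlMirrorContext(PostgresDialect, Literal)
import ctx._

case class Person(name: String, middleName: Option[String])

// Scala-idiomatic `==`: two absent (NULL) values now compare as equal
val scalaEq = quote {
  query[Person].filter(p => p.middleName == lift(Option.empty[String]))
}

// ANSI-SQL `===`: NULL comparisons are never true, matching plain SQL `=`
val ansiEq = quote {
  query[Person].filter(p => p.middleName === lift(Option.empty[String]))
}
```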
3.2.0
- Allow == for Option[T] and/or T columns
- Introducing Code Generator
- Fix variable shadowing issue in action metas
- Change effect to protected
- Update spark-sql to 2.4.1
- Update orientdb-graphdb to 3.0.17
- Update sqlite-jdbc to 3.27.2.1
3.1.0
- oracle support
- quill cassandra for lagom
- Fix the problem with re-preparing already prepared statements
- Rely on ANSI null-fallthrough where possible
- Fix for non-fallthrough null operations in map/flatMap/exists
- Move basic encoders into EncodingDsl
- Make string column name as property
- Update MySQL driver/datasource
- Provide a better "Can't tokenize a non-scalar lifting" error message
3.0.1
3.0.0
- First-class support for dynamic queries
- support dynamic strings within infix
- Create a streaming module for Monix over JDBC - combined approach
- Better implementation of Spark nested objects.
- Spark 2.4 (with Scala 2.12 support)
- Create quill-cassandra-monix
- Move `io.getquill.CassandraStreamContext` into the `quill-cassandra-streaming-monix` module
- filterIf method for dynamic queries
- Make UDT encoding to support options
- fix column name conflict
- #1204 add explicit `AS` for aliases (except table context)
- sqlite dialect - translate boolean literals into 1/0
- sqlite dialect - ignore null ordering
- fail if property is not a case accessor
- verify table references
- fix property renaming for nested queries within infixes
- expand map.distinct
- quill-spark: fix groupby with multiple columns
- quill-spark: escape strings
- StatementInterpolator performance improvements
- fix async transactions for scala future + io monad
- Update orientdb-graphdb to 3.0.13
- update guava version to 27.0.1-jre
- documentation improvements
Migration notes
- `io.getquill.CassandraStreamContext` is moved into the `quill-cassandra-monix` module and now uses Monix 3.
- `io.getquill.CassandraMonixContext` has been introduced, which should eventually replace `io.getquill.CassandraStreamContext`.
- Spark queries with nested objects will now rely on the star `*` operator and `struct` function to generate sub-schemas as opposed to full expansion of the selection.
- Most functionality from `JdbcContext` has been moved to `JdbcContextBase` for the sake of re-usability. `JdbcContext` is only intended to be used for synchronous JDBC.
2.6.0
- add noFailFast option to FinagleMysqlContextConfig
- add transactionWithIsolation to FinagleMysqlContext
- Add encoding between java.time.ZonedDateTime and java.util.Date
- Fix Infix causing ignoring renamings
- Cassandra async improvements
- Add upsert support for SQLite
- add IO.lift
- Minor performance improvements
- Add encoder/decoder for Byte
- Use Option.getOrElse(boolean) to generate ... OR IS [NOT] NULL queries
- Upgrade finagle to 18.8.0
- Fix renaming fields with schema/query meta for queries where unary/binary operation produces nested query
- scala-js 0.6.24
- Add question mark escaping for Spark
- Allow mapping MySQL `TIMESTAMP` and `DATETIME` to Joda `DateTime` type
- added error message example in the documentation
- Wrong timeout configs
- Fix unnecessary nesting of infix queries
Migration notes
- When the infix starts with a query, the resulting sql query won't be nested
2.5.4
- Adds master-slave capability to FinagleMysqlContext
- Fix concatenation operator for SQL Server
- Use PreparedStatement.getConnection for JDBC Array Encoders
- CassandraSessionContext : change session to a lazy val
2.5.0, 2.5.1, 2.5.2, and 2.5.3
Broken releases, do not use.
2.4.2
2.4.1
- Add support of upsert for Postgres and MySQL
- Add flatMap, flatten, getOrElse and Option.apply
- quill-cassandra: Add encoding for `Byte` and `Short`
- Fix renaming aggregated properties in groupBy with custom querySchema
- Change referencing `super.prepare` call to `this.prepare` in quill-cassandra contexts
- Add connectTimeout option into FinagleMysqlContextConfig
2.3.3
- Dependency updates
- update finagle-postgres to 0.7.0
- fixing unions with Ad-Hoc tuples
- Fix removing assignment in returning insert if embedded field has columns with the same name as in parent case class
2.3.2
- Simplify multiple `AND`/`OR` sql generation
- Fix SQLServer take/drop SQL syntax
- Fix for Ad-Hoc Case Class producing Dynamic Queries
- Fix throwing exception instead of failed future in cassandra async prepare
- Fix invalid alias with distinct
- Log errors instead of throwing exception directly in several places
- Update finagle to 17.12.0
2.3.1
- Fix Ad-Hoc Case Classes for Spark
- Make the error reporting of comparing `Option` to `null` point to the actual position
- Fix postgres query probing failing for queries with wildcards
- Dependency updates
- Update finagle to 17.11.0
2.3.0
2.2.0
- Fix StackOverflowError in select distinct with aggregation
- Add support of java.time.Instant/java.time.LocalDate for quill-cassandra
- Fix select query for unlimited optional embedded case classes
- `concatMap`, `startsWith`, and `split` support
- Upgrade finagle to 17.10.0
2.1.0
- Spark SQL support
- Add support of postgres sql arrays operators
- Fix reversed log parameter binds
- Fix renaming properties for unlimited optional and raw `Embedded` case classes
- Improve coverage
- Dependency updates
- Converge of PostgreSQL and MySQL behavior
2.0.0
We're proud to announce Quill 2.0. All bugs were fixed, so this release doesn't have any known bugs!
- IO monad
- fall back to dynamic queries if dialect/naming isn't available
- Cassandra UDT encoding
- Add support of 'contains' operation on Cassandra collections
- Add org.joda.time.DateTime and java.time.ZonedDateTime encoding for quill-async-postgres
- Update dependencies
- give a better error message for option.get
- Remove OrientDB async context
- remove anonymous class support
- Remove client.ping from the FinagleMysqlContext constructor
Fixes
#872, #874, #875, #877, #879, #889, #890, #892, #894, #897, #899, #900, #903, #902, #904, #906, #907, #908, #909, #910, #913, #915, #917, #920, #921, #925, #928
Migration notes
- Sources now take a parameter for idiom and naming strategy instead of just type parameters. For instance, `new SqlSource[MysqlDialect, Literal]` becomes `new SqlSource(MysqlDialect, Literal)`.
- Composite naming strategies don't use mixing anymore. Instead of the type `Literal with UpperCase`, use the parameter value `NamingStrategy(Literal, UpperCase)`.
- Anonymous classes aren't supported for function declaration anymore. Use a method with a type parameter instead. For instance, replace `val q = quote { new { def apply[T](q: Query[T]) = ... } }` with `def q[T] = quote { (q: Query[T]) => ... }`.
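A minimal sketch of the new constructor style, shown with the mirror context for illustration (real contexts follow the same pattern):

```scala
import io.getquill._

// Before 2.0.0: idiom and naming strategy were type parameters, and composite
// naming strategies were mixed in, e.g. `Literal with UpperCase`.
// From 2.0.0 on, both are plain constructor values:
val ctx = new SqlMirrorContext(MirrorSqlDialect, NamingStrategy(Literal, UpperCase))
```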
1.4.0
- Allow unlimited nesting of embedded case classes and optionals
- Accept traversables for batch action
- Add joda time encoding to `quill-async`
- Remove unnecessary `java.sql.Types` usage in JDBC decoders
- Add mappedEncoder and mappedDecoder for AnyVal
- Support contains, exists, forall for optional embedded case classes with optional fields
- Improve error message for "Can't expand nested value ..." error
- Improve error message for query probing
- Report the exact tree position while typechecking the query
- Fix inserting single auto generated column
- Update finagle to 7.0.0
- Dependency updates
Migration notes
- For `quill-async` contexts: `java.time.LocalDate` now supports only `date` sql types, and `java.time.LocalDateTime` only `timestamp` sql types. Joda times follow these conventions accordingly. An exception is made for `java.util.Date`: it supports both `date` and `timestamp` types for historical reasons (`java.sql.Timestamp` extends `java.util.Date`).
- `quill-jdbc` encoders no longer accept `java.sql.Types` as a first parameter.
1.3.0
- SQLServer support
- OrientDB support
- Query bind variables logging
- Add url configuration property for quill-async
- Add infix support for batch actions
- Better support for empty lifted queries
- SQLite 3.18.0
- Fix nested query stack overflow
- Performance optimization of Interleave
- Performance optimization of ReifyStatement
- Fix invalid nested queries with take/drop
- Fix NPE when using nested quoted binding
- Make `withConnection` method protected in AsyncContext
1.2.1
- upgrade finagle-postgres to 0.4.2
- add collections support for row elements (SQL Arrays, Cassandra Collection)
- allow querySchema/schemaMeta to rename optional embedded case classes
- make Quill compatible with Scala 2.12.2
- upgrade finagle-mysql to 6.44.0
1.1.1
see migration notes below
- avoid dynamic query generation for option.contains
- fix forall behaviour in quotation
- change query compilation log level to debug
- fix infix query compilation
- add support for Cassandra DATE type
- fix finagle timezone issues
- add max prepare statement configuration
- upgrade finagle-mysql to 6.43.0
- fix compilation issue when importing the List type
- upgrade cassandra-driver to 3.2.0
- apply NamingStrategy to returning column
- upgrade scala to 2.11.11
- fix finagle mysql context constructor with timezone
- rename Cassandra property address translater to translator
- fix timezone handling for finagle-mysql
Migration notes
- Cassandra context property `ctx.session.addressTranslater` is renamed to `ctx.session.addressTranslator`
1.1.0
see migration notes below
- materialize encoding for generic value classes
- sbt option to hide debug messages during compilation
- support Option.contains
- recursive optional nested expanding
- apply naming strategy to column alias
- fix existing and add missing encoders and decoders for java.util.UUID
- upgrade finagle-postgres to 0.3.2
Migration notes
- JDBC contexts are implemented in separate classes: `PostgresJdbcContext`, `MysqlJdbcContext`, `SqliteJdbcContext`, `H2JdbcContext`
- all contexts are supplied with default `java.util.UUID` encoder and decoder
1.0.1
- include SQL type info in Encoder/Decoder
- make encoder helpers and wrapper type public for quill-finagle-postgres
- fix property renaming normalization order
- workaround compiler bug involving reflective calls
- fix flat joins support
- encoders and decoders refactoring
- avoid alias conflict for multiple nested explicit joins
- avoid merging filter condition into a groupBy.map
- move `Embedded` from the `io.getquill.dsl.MetaDsl` inner context to the `io.getquill` package
- make `currentConnection` protected
- add abstract encoders/decoders to CassandraContext and uuid mirror encoder/decoder
- made the SQL types for AsyncEncoder/AsyncDecoder generic
1.0.0-RC1 - 20-Oct-2016
- introduce `finagle-postgres`
- introduce meta dsl
- expand meta dsl
- encoder for java 8 LocalDate & LocalDateTime
- Upgraded to Monix 2.x
- Make withClient function not private
- pass ssl settings to async driver
Migration notes
- New API for schema definition: `query[Person].schema(_.entity("people").columns(_.id -> "person_id"))` becomes `querySchema[Person]("People", _.id -> "person_id")`. Note that the entity name ("People") is now always required.
- `WrappedValue[T]` no longer exists; Quill can now automatically encode `AnyVal`s.
0.10.0 - 5-Sep-2016
see migration notes below
- check types when parsing assignments and equality operations
- Update finagle-mysql to finagle 6.37.0
- Split quill-async into quill-async-mysql and quill-async-postgres
- cql: support the `+` operator
- cassandra context constructor with ready-made Cluster
- support forced nested queries
- support mapped encoding definition without a context instance
- fix class cast exception for returned values
- fix free variables detection for the rhs of joins
Migration notes
- `mappedEncoding` has been renamed to `MappedEncoding`.
- The way async drivers are added has changed. To add mysql async to your project use `quill-async-mysql`, and for postgres async use `quill-async-postgres`. It is no longer necessary to add `quill-async` yourself.
- Action assignments and equality operations are now typesafe. If there's a type mismatch between the operands, the quotation will not compile.
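For example, a mapped encoding for a hypothetical wrapper type can now be declared without a context instance in scope (shown with the current import path for illustration):

```scala
import io.getquill.MappedEncoding

// Hypothetical wrapper type for illustration
case class Email(value: String)

// Declared at the top level; no context instance is required
implicit val emailEncode: MappedEncoding[Email, String] = MappedEncoding(_.value)
implicit val emailDecode: MappedEncoding[String, Email] = MappedEncoding(Email(_))
```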
0.9.0 - 22-Aug-2016
see migration notes below
- new encoding, macros refactoring, and additional fixes
- Refactor generated to returning keyword in order to return the correct type
- Allow finagle-mysql to use Long with INT columns
- create sub query if aggregation on distinct query
- upgrade dependency to finagle 6.36.0
- Make decoder function public
- set the scope of all cassandra context type definitions to public
- make the cassandra decoder fail when encountering a column with value null
- fix Option.{isEmpty, isDefined, nonEmpty} show on action.filter
- Encoder fix
- enclose operand-queries of SetOperation in parentheses
Migration notes
- The fallback mechanism that looks for implicit encoders defined in the context instance has been removed. This means that if you don't `import context._`, you have to change the specific imports to include the encoders in use.
- `context.run` now receives only one parameter. The second parameter that used to receive runtime values doesn't exist anymore. Use `lift` or `liftQuery` instead.
- Use `liftQuery` + `foreach` to perform batch actions and define contains/in queries.
- `insert` now always receives a parameter, which can be a case class.
- Non-lifted collections aren't supported anymore. Example: `query[Person].filter(p => List(10, 20).contains(p.age))`. Use `liftQuery` instead.
- `schema(_.generated())` has been replaced by `returning`.
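A sketch of the lifted style these notes describe, assuming a context instance `ctx` is already in scope (the entity and values are hypothetical):

```scala
import ctx._ // assumed Quill context instance

case class Person(name: String, age: Int)

// contains/in queries: lift the collection instead of embedding it directly
val withAges = quote {
  query[Person].filter(p => liftQuery(List(25, 30)).contains(p.age))
}

// batch actions: liftQuery + foreach
val insertPeople = quote {
  liftQuery(List(Person("Joe", 30))).foreach(p => query[Person].insert(p))
}
```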
0.8.0 / 17-Jul-2016
see migration notes below
- introduce contexts
- sqlite support
- scala.js support
- support `toInt` and `toLong`
- quill-jdbc: support nested `transaction` calls
- fix bind order for take/drop with extra param
- quotation: allow lifting of `AnyVal`s
- make liftable values work for the cassandra module
- apply intermediate map before take/drop
- support decoding of optional single-value case classes
- make type aliases for `run` results public
- fail compilation if query is defined outside a `quote`
- fix empty sql string
Migration notes
This version introduces `Context` as a replacement for `Source`. This change makes the quotation creation dependent on the context to open the path for a few refactorings and improvements we're planning to work on before the 1.0-RC1 release.
Migration steps:
- Remove any import that is not `import io.getquill._`
- Replace the `Source` creation by a `Context` creation. See the readme for more details. All types necessary to create the context instances are provided by `import io.getquill._`.
- Instead of importing from `io.getquill._` to create quotations, import from your context instance: `import myContext._`. The context import will provide all types and methods to interact with quotations and the database.
- See the documentation about dependent contexts in case you get compilation errors because of type mismatches.
0.7.0 / 2-Jul-2016
- transform quoted reference
- simplify `finagle-mysql` action result type
- provide default values for plain-sql query execution
- quotation: fix binding conflict
- don't consider `?` a binding if inside a quote
- fix query generation for wrapped types
- use querySingle/query for parametrized query according to return type
- remove implicit ordering
- remove implicit from max and min
- support explicit `Predef.ArrowAssoc` call
- added handling for string lists in ClusterBuilder
- add naming strategy for pluralized table names
- transform ConfiguredEntity
0.6.0 / 9-May-2016
- explicit bindings using `lift`
- Code of Conduct
- dynamic type parameters
- support contains for Traversable
- `equals` support
- Always return List for any type of query
- quill-sql: support value queries
- quill-sql: `in`/`contains` - support empty sets
- Support `Ord` quotation
- `blockParser` off-by-one error
- show ident instead of ident.toString
- decode bit as boolean
0.5.0 / 17-Mar-2016
- Schema mini-DSL and generated values
- Support for inline vals in quotation blocks
- Support for Option.{isEmpty, nonEmpty, isDefined}
- Tolerant function parsing in option operation
- quill-sql: rename properties and assignments
- quill-cassandra: rename properties and assignments
- Fix log category
- Accept unicode arrows
- Add set encoder to SqlSource
- Don't quote the source creation tree if query probing is disabled
- Bind `drop.take` according to the sql terms order
- Avoid silent error when importing the source's implicits for the encoding fallback resolution
- Quotation: add identifier method to avoid wrong type refinement inference
- Unquote multi-param quoted function bodies automatically
0.4.1 / 28-Feb-2016
- quill-sql: h2 dialect
- support for auto encoding of wrapped types
- non-batched actions
- `distinct` support [0] [1]
- postgres naming strategy
- quill-core: unquote quoted function bodies automatically
- don't fail if the source annotation isn't available
- fix custom aggregations
- quill-finagle-mysql: fix finagle mysql execute result loss
- quill-cassandra: stream source - avoid blocking queries
0.4.0 / 19-Feb-2016
- new sources creation mechanism
- simplified join syntax
- Comparison between Quill and other alternatives for CQL
- `contains` operator (sql `in`)
- unary sql queries
- query probing is now opt-in
- quill-cassandra: upgrade Datastax Java Driver to version 3.0.0
- support implicit quotations with type parameters
- quill-cassandra: UUID support
- quill-async: more reasonable numeric type decodes
0.3.1 / 01-Feb-2016
0.3.0 / 26-Jan-2016
- quill-cassandra: first version of the module featuring async and sync sources
- quill-cassandra: reactive streams support via Monix
- quill-core: updates using table columns
- quill-core: explicit inner joins
- quill-core: configuration option to disable the compile-time query probing
- quill-core: `if`/`else` support (sql `case`/`when`)
- quill-async: uuid encoding
- quill-core: custom ordering
- quill-core: expressions in sortBy
0.2.1 / 28-Dec-2015
- expire and close compile-time sources automatically
- Aggregation sum should return an Option
- Changed min/max implicit from Numeric to Ordering
- provide implicit to query case class companion objects directly
- don't fuse multiple `sortBy`s
- actions now respect the naming strategy
0.2.0 / 24-Dec-2015
- Insert/update using case class instances
- Better IntelliJ IDEA support
- Implicit quotations
- `like` operator
- string interpolation support
- Finagle pool configuration
- Allow empty password in Finagle Mysql client
- Bug fixes:
0.1.0 / 27-Nov-2015
- Initial release