Status Update
Comments
ae...@gmail.com <ae...@gmail.com> #2
It should be possible to fetch an entity by doing
Model.get_by_uniquepropertyname and also
Model.get_or_insert_by_uniquepropertyname... like Rails does with
ActiveRecord (e.g., Book.find_or_create_by_isbn).
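A rough sketch of what that suggested API could look like on top of the db library (get_by_isbn / get_or_insert_by_isbn are hypothetical names, not part of the SDK, and the query-based lookup gives no real uniqueness guarantee):

from google.appengine.ext import db

class Book(db.Model):
    isbn = db.StringProperty(required=True)

    @classmethod
    def get_by_isbn(cls, isbn):
        # plain property query: eventually consistent, no uniqueness guarantee
        return cls.all().filter('isbn =', isbn).get()

    @classmethod
    def get_or_insert_by_isbn(cls, isbn, **kwds):
        book = cls.get_by_isbn(isbn)
        if book is None:
            book = cls(isbn=isbn, **kwds)
            book.put()  # racy: two concurrent callers can both insert
        return book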
el...@google.com <el...@google.com> #3
across multiple fields (and an index would almost certainly be needed to support this
efficiently, anyway).
ae...@gmail.com <ae...@gmail.com> #4
should) still be specified in the schema/property constructor and the index
automatically generated.
ae...@gmail.com <ae...@gmail.com> #5
composite unique indexes/properties, especially for reference properties.
da...@google.com <da...@google.com> #10
An entity's key name is immutable once it is set at creation. For example, say you have a User model with an
email address in it. You want the email address to be unique across all User
entities. If you use the email address as the key name, you are out of luck when the user
changes his/her email.
ae...@gmail.com <ae...@gmail.com> #11
Another problem: if you change a key you have to update all referencing properties.
Also, uniqueness is not the same use-case as a primary key.
ae...@gmail.com <ae...@gmail.com> #12
this would be a nice feature. it's definitely one of the most widely used constraints in SQL schemas.
unfortunately, it would be pretty difficult to implement with the current datastore
architecture, due in large part to our index consistency model. we may have
to chalk this up as another place the datastore doesn't provide traditional
relational database features.
at first, the key_name approach seemed tempting to me too. unfortunately, as edoardo
and bernddorn mention, it effectively makes those properties read only. having said
that, i suspect that "unique" properties like these actually are often read only, so
that might not be quite as big a drawback.
out of curiosity, would this feature be useful to anyone if it was limited to entity
groups? ie, you could specify that property values must be unique within each
individual entity group, but they could be repeated across entity groups?
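For reference, a minimal sketch of the key_name approach being discussed, assuming a simple db User model; uniqueness falls out of key identity, at the cost of the email being effectively read-only:

from google.appengine.ext import db

class User(db.Model):
    name = db.StringProperty()

def create_user(email, **kwds):
    def txn():
        # a get plus conditional insert inside a transaction is race-free,
        # because the entity's key *is* the email
        if User.get_by_key_name(email) is not None:
            raise ValueError('email already taken: %s' % email)
        user = User(key_name=email, **kwds)
        user.put()
        return user
    return db.run_in_transaction(txn)

The SDK's built-in Model.get_or_insert does essentially this get-or-create step transactionally.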
ae...@gmail.com <ae...@gmail.com> #13
Even supposedly stable identifiers (say ISSN/ISBN etc.) may actually change in the future (and they did).
That is why, in general, it is a very bad idea to use properties as key_names (that's
the whole thinking behind using a simple integer as the primary key in relational database
design). Moreover, there are properties (say email addresses) that you want to be
unique but that are quite likely to change.
The correct way of implementing this would be to have the unique constraint apply
within all entities of a specific KIND, not just an entity group as you suggested.
Another way of going at it would be to be able to query the datastore within a
transaction.
ae...@gmail.com <ae...@gmail.com> #14
that would be a way to ensure uniqueness in a transaction-safe manner, isn't it?
even though in my applications i do not check uniqueness in a transaction-safe way, i thought
that if this happened on the datastore end it would be possible to implement it at
the storage/index level, which would make it much less error prone than doing it at
the application level via properties.
ae...@gmail.com <ae...@gmail.com> #15
After putting the new object, do a query on that property; if the count is greater than 1, delete self. If you worry about
exceptions, you can mark one property of the new object to indicate it's new and flip
it once it's verified unique. If there's contention on put, both objects should delete
themselves; there shouldn't be 2 objects both marked old but with the same property value.
Of course, solving this at the index level would be much more efficient.
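A literal sketch of that optimistic scheme in db terms (illustrative only; as described it is still racy and can delete both candidates under contention):

from google.appengine.ext import db

class Story(db.Model):
    score = db.FloatProperty()

def put_if_unique(story):
    story.put()
    # look for more than one entity carrying this value
    count = Story.all().filter('score =', story.score).count(2)
    if count > 1:
        story.delete()  # lost the race; the caller should retry
        return False
    return True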
da...@google.com <da...@google.com> #16
If so, would it be possible to create a property that does something like:
entity.index = eventual_id
instead of:
entity.put()
entity.index = entity.key().id()
ae...@gmail.com <ae...@gmail.com> #17
da...@google.com <da...@google.com> #18
key names are only unique among entities with the same kind and the same parent, or root entities of the same kind.
in general, the full path is the only thing that's always guaranteed to be unique.
ae...@gmail.com <ae...@gmail.com> #19
The point of unique properties across entities is to have some slightly more complex guarantees about consistency than what
you currently provide with key_name. Is this feature really impossible to implement
in a scalable way, or is this a specific problem of Bigtable or the datastore?
What about allowing the key_name to be changed safely? Could you somehow allow for
locking two entity groups at the same time? If a deadlock occurs you could just
revert one of the transactions and let the other continue to run. This might need
some more thought though. :)
ae...@gmail.com <ae...@gmail.com> #20
This thread was helpful. I did come up with a solution for my specific use case. I'm using a floating
point number as a score and need it to be unique because I am paging off of the
value. I also needed it to not be immutable, because scores can change. What I did
was override the put method with a check that the value is unique; if not, it
adjusts the value and tries again.
def put(object):
    valid = False
    while not valid:
        cquery = db.GqlQuery('SELECT * FROM Story WHERE score = :1', object.score)
        results = cquery.fetch(1)
        if len(results) > 0:
            # collision: nudge the score and check again
            object.score = object.score + .01
        else:
            valid = True
    super(Story, Story).put(object)
One thing that might be useful from the appengine standpoint would be to add an
attribute similar to validator for properties, like unique_validator = None, which
could then be a hook for developers to create their own functions. Or if you needed a
more generic function that could raise an exception, it could be something similar to
the above, except it would raise an exception on the check for an existing value, and
it could then be up to the developer to catch the exception and adjust the value
accordingly before reattempting the put operation. Then you could just have a unique
= True/False attribute.
ae...@gmail.com <ae...@gmail.com> #21
One option is a class-level table annotation to identify which properties should be unique (either
alone or alongside others). This is usually used in addition to property-level
annotations, which have a unique=True/False attribute.
Before calling put(), the information from the annotation (or a simple property
_unique_constraints) is used to issue some queries, and, if no constraints are
violated, the entity is stored. If any constraint is violated an appropriate error
can be raised.
I have implemented a very basic version of my suggestion, which is enough for what I
need:
## start
class DbEntity(db.Model):
    _unique_properties = None
    timestamp = db.DateTimeProperty(auto_now=True)

    def __init__(self, parent=None, key_name=None, _app=None,
                 unique_properties=None, **kwds):
        super(DbEntity, self).__init__(parent, key_name, _app, **kwds)
        if unique_properties:
            logging.debug("Unique properties: %s" % unique_properties)
            self._unique_properties = unique_properties

    def put(self):
        if self._unique_properties:
            logging.debug('checking unique properties for: %s ...' % self.__class__)
            for unique in self._unique_properties:
                gql_string = 'WHERE %s = :1' % unique
                param = getattr(self, unique)  # safer than eval
                query = self.gql(gql_string, param)
                other_objects = query.fetch(10)
                if len(other_objects) > 0:
                    logging.error("Objects that violate the constraints: %s" % other_objects)
                    raise db.BadPropertyError("Other instances of %s exist with %s = %s" %
                                              (self.__class__, unique, param))
        return super(DbEntity, self).put()

class Team(DbEntity):
    name = db.StringProperty(required=True)

    def __init__(self, parent=None, key_name=None, _app=None, **kwds):
        # set the unique property names, passing the other args through
        super(Team, self).__init__(parent, key_name, _app,
                                   unique_properties=['name'], **kwds)
## end
da...@google.com <da...@google.com> #22
This code contains a race condition and will not ensure uniqueness (it should, most
of the time, but there's no uniqueness guarantee).
ae...@gmail.com <ae...@gmail.com> #25
I understand there is a technique that involves creating an auxiliary entity to be used as a mutex for the secondary key. In JDO/JPA that is a bunch of ugly, obfuscating code that requires extra Java classes, etc. JDO/JPA pretty much assume the semantics of a full SQL datastore, so this kind of kludge really shows that JDO/JPA is not a great fit for the Google datastore. The technique is much less offensive if you are using the low-level API. But all my code now uses JDO, so is it possible to implement the uniqueness annotation?
From my point of view, it would be OK to require an index and/or accept that writes to such entities take longer. I understand that stage 1 (the commit phase) of a write does not update indices, so that (as it now stands) the entity is effectively committed (logged and visible to get()) before any indices are updated. Indices are updated in phase 2, which can occur after the commit call returns. However, uniqueness constraints are important enough that it would be OK with me if my commit waited for phase 2 to finish before it returned. This would allow the indexing phase to enforce the uniqueness constraint. It would also be OK for the non-unique entity committed in phase 1 to be visible to get() until phase 2 rolls it back.
At the very least, having a uniqueness constraint that only allows ONE of the entities into the index might be good enough. In other words, if I query by secondary key, I will get a single entity and it will always be the same one.
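For the record, a rough sketch of that auxiliary "mutex" entity technique with the low-level-ish db API (all names here are made up for illustration; the JDO/JPA version has to encode the same shape in extra classes):

from google.appengine.ext import db

class SecondaryKeyMutex(db.Model):
    # the key_name is the secondary-key value itself; existence means "taken"
    owner = db.StringProperty()

def reserve_secondary_key(value, owner_key):
    def txn():
        if SecondaryKeyMutex.get_by_key_name(value) is not None:
            raise ValueError('secondary key already in use: %s' % value)
        SecondaryKeyMutex(key_name=value, owner=str(owner_key)).put()
    db.run_in_transaction(txn)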
ae...@gmail.com <ae...@gmail.com> #26
The downfall is when the user wants to change their email address. We have to clone not just their account entity, but all the entities underneath it (of which there are potentially many thousands.) Of course, we want the user to be able to use their account during this time, so there's a lot of logic that pains me to have to keep around just to deal with the case of accounts in motion...
da...@google.com <da...@google.com> #28
It's simple, but it has some possibility of leaving unique values unable to be used, though it will guarantee that a unique value is actually unique. That is, because there is no transaction spanning the Unique model and the model holding the unique values, a failure might leave a unique value reserved but never used. It fails on the side of guaranteeing uniqueness.
The example is Python, but is simple enough that there will be a direct Java analogy.
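The linked example isn't reproduced here, but a minimal sketch of that kind of Unique model might look like this (the kind name, key_name encoding and reserve method are assumptions, not the actual linked code):

from google.appengine.ext import db

class Unique(db.Model):
    # marker entity; the key_name encodes kind, property and value
    @classmethod
    def reserve(cls, kind, prop, value):
        key_name = '%s|%s|%s' % (kind, prop, value)
        def txn():
            if cls.get_by_key_name(key_name) is not None:
                return False  # value already reserved
            cls(key_name=key_name).put()
            return True
        return db.run_in_transaction(txn)

Reserving the value first (e.g. Unique.reserve('User', 'email', email)) and then putting the User gives exactly the failure mode described: if the second put fails, the value stays reserved but unused, never duplicated.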
ae...@gmail.com <ae...@gmail.com> #29
you create an entity group for each unique field, and then do a cross-group transaction that checks/updates that entity group along with the entity group of your model.
this limits you to 4 unique fields per model, and is probably terrible for performance since you're effectively serializing writes across each field. but if you really, really need uniqueness...
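A sketch of that idea with ndb cross-group transactions (the model names are illustrative):

from google.appengine.ext import ndb

class UniqueEmail(ndb.Model):
    pass  # the key id is the email value itself; existence means taken

class User(ndb.Model):
    email = ndb.StringProperty()

@ndb.transactional(xg=True)  # touches two entity groups atomically
def create_user(email):
    marker_key = ndb.Key(UniqueEmail, email)
    if marker_key.get() is not None:
        raise ValueError('email already taken')
    UniqueEmail(key=marker_key).put()
    user = User(email=email)
    user.put()
    return user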
ba...@gmail.com <ba...@gmail.com> #30
@ndb.transactional
def put(self):
    from google.appengine.ext.db import STRONG_CONSISTENCY
    key = User.gql("WHERE ANCESTOR IS :1 AND email = :2",
                   self.key.parent(), self.url).get(keys_only=True,
                                                    read_policy=STRONG_CONSISTENCY)
    assert key is None or key == self.key
    ndb.Model.put(self)

def put(self):
    from google.appengine.api import memcache
    memcache_key = "UserEmail" + repr(self.url)
    assert memcache.incr(memcache_key) is None
    try:
        from google.appengine.ext.db import STRONG_CONSISTENCY
        key = User.gql("WHERE ANCESTOR IS :1 AND email = :2",
                       self.key.parent(), self.url).get(keys_only=True,
                                                        read_policy=STRONG_CONSISTENCY)
        assert key is None or key == self.key
        ndb.Model.put(self)
    finally:
        memcache.delete(memcache_key)
Of course I prefer the first one, and according to the documentation both should work.
Yet some people insist on using memcache to ensure there are no race conditions.
The only reason I can see is the query, but even so, within the same entity group transactions are serializable according to the documentation, so it should work just fine. What am I missing?
ba...@gmail.com <ba...@gmail.com> #31
Markers also have a creation time and an associated instance. If a marker for a unique combination already exists, then we check whether its associated instance still exists (by doing an ancestor, keys-only query on its key) before raising an exception.
When creating new instances (where the key id hasn't yet been generated) we create a marker without an associated instance key, but we ignore such markers for unique-constraint checking after a few seconds.
Obviously, this is a bit of a kludge, but it seems to work, and we've tried the approach on some pretty high-traffic sites. It is costly though, both in performance and quota, but sometimes you really need to enforce uniqueness.
It would be much better if a similar approach could be implemented in the datastore itself. This is still a very sought-after feature.
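A compressed sketch of the marker scheme just described (simplified to a direct get of the associated instance instead of the ancestor keys-only query; the names and grace period are illustrative):

from google.appengine.ext import ndb
import datetime

GRACE = datetime.timedelta(seconds=5)

class UniqueMarker(ndb.Model):
    # key id encodes property + value; instance points at the owning entity
    created = ndb.DateTimeProperty(auto_now_add=True)
    instance = ndb.KeyProperty()

def marker_is_taken(marker):
    if marker.instance is not None:
        # honor the marker only while its associated instance still exists
        return marker.instance.get() is not None
    # marker created before its entity got a key: honor it only briefly
    return datetime.datetime.utcnow() - marker.created < GRACE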
ae...@gmail.com <ae...@gmail.com> #32
I believe this is quite an important feature to be implemented.
I have run into similar constraints without it.
ae...@gmail.com <ae...@gmail.com> #42
Were you able to reproduce the issue and fix it, or did another issue get fixed?
al...@beeper.com <al...@beeper.com> #43
I've spent an hour trying to repro the deadlock I mentioned in the PRAGMA busy_timeout = <time in ms> discussion with the CAT S22. I have not been successful, so maybe that does fix my problem. I'm going to try releasing with BundledSQLiteDriver again 🤞
ae...@gmail.com <ae...@gmail.com> #44
Hi. Are there any updates on this?
da...@google.com <da...@google.com> #45
Hey - 2.7.0-alpha13 has a change that should reduce the issue, but it's not guaranteed.
I was not able to reproduce the issue with the project and data provided.
Things got busy so I haven't been able to dedicate more time to this issue, but it's sort of a wild goose chase without a reliable repro.
ck...@gmail.com <ck...@gmail.com> #46
Starting with the latest version, 2.7.0-alpha13, we switched to BundledSQLiteDriver,
which produces hundreds of database-locked exceptions.
Non-fatal Exception: android.database.SQLException: Error code: 5, message: database is locked
at androidx.sqlite.driver.bundled.BundledSQLiteStatementKt.nativeStep(BundledSQLiteStatement.jvmAndroid.kt)
at androidx.sqlite.driver.bundled.BundledSQLiteStatementKt.access$nativeStep(BundledSQLiteStatement.jvmAndroid.kt)
at androidx.sqlite.driver.bundled.BundledSQLiteStatement.step(BundledSQLiteStatement.jvmAndroid.kt:100)
at app.resubs.core.database.dao.PendingNotificationDao_Impl.getAll$lambda$1(PendingNotificationDao_Impl.kt:94)
at androidx.room.util.DBUtil__DBUtil_androidKt$performSuspending$lambda$1$$inlined$internalPerform$1.invokeSuspend(DBUtil.kt:68)
at androidx.room.util.DBUtil__DBUtil_androidKt$performSuspending$lambda$1$$inlined$internalPerform$1.invoke(DBUtil.kt:8)
at androidx.room.util.DBUtil__DBUtil_androidKt$performSuspending$lambda$1$$inlined$internalPerform$1.invoke(DBUtil.kt:4)
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$4.invokeSuspend(ConnectionPoolImpl.kt:144)
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$4.invoke(ConnectionPoolImpl.kt:8)
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$4.invoke(ConnectionPoolImpl.kt:4)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:43)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:166)
at kotlinx.coroutines.BuildersKt.withContext(Builders.kt)
at androidx.room.coroutines.ConnectionPoolImpl.useConnection(ConnectionPoolImpl.kt:144)
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$1.invokeSuspend(ConnectionPoolImpl.kt:13)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.internal.ScopeCoroutine.afterResume(Scopes.kt:35)
ae...@gmail.com <ae...@gmail.com> #47
Hi
da...@google.com <da...@google.com> #48
I did find that we were not configuring busy_timeout on the initial connection, which can lead to SQLITE_BUSY (error code 5). I've sent a CL to fix that.
I have another alternative / workaround for those hitting this issue, which will also help me narrow down the root cause: wrap the driver so the pragma is applied to every connection it opens.
val actualDriver = BundledSQLiteDriver()
val driverWrapper =
    object : SQLiteDriver by actualDriver {
        override fun open(fileName: String): SQLiteConnection {
            return actualDriver.open(fileName).also { newConnection ->
                newConnection.execSQL("PRAGMA busy_timeout = $customBusyTimeout")
            }
        }
    }
roomDatabaseBuilder
    .setDriver(driverWrapper)
Let me know if this helps or not.
ae...@gmail.com <ae...@gmail.com> #49
Hi
ap...@google.com <ap...@google.com> #50
Project: platform/frameworks/support
Branch: androidx-main
Author: Daniel Santiago Rivera <
Link:
Apply busy_timeout config for initial connection
Fix an issue where busy_timeout was not being configured in the initial connection where schema validation is also done. The busy_timeout must be set on all connections to avoid SQLITE_BUSY.
Bug: 380088809
Test: BaseBuilderTest#setCustomBusyTimeout
Change-Id: I9320856f64363b05bcf6407eed0efe36ef3312a3
Files:
- M room/integration-tests/multiplatformtestapp/src/commonTest/kotlin/androidx/room/integration/multiplatformtestapp/test/BaseBuilderTest.kt
- M room/room-runtime/src/commonMain/kotlin/androidx/room/RoomConnectionManager.kt
- M room/room-runtime/src/commonMain/kotlin/androidx/room/coroutines/ConnectionPool.kt
Hash: ba379355137898cf40a16bff549863ab6a55f093
Date: Mon Feb 10 09:32:51 2025
ae...@gmail.com <ae...@gmail.com> #51
Hi. I have applied this fix to my app and released it to 10% of users. I've got 4000 installs and so far I don't see this error. I am going to try with version 2.7.0-rc01.
se...@scrapgolem.com <se...@scrapgolem.com> #52
I'm still seeing this or a similar issue on 2.7.0-rc01. I haven't tried using AndroidSQLiteDriver or any of the other workarounds listed here. Previously, I was using setQueryCoroutineContext(Dispatchers.IO.limitedParallelism(1)), which did solve the problem. I removed that after upgrading to 2.7.0-rc01.
I got this crash yesterday:
android.database.SQLException: Error code: 5, message: database is locked
at androidx.sqlite.driver.bundled.BundledSQLiteStatementKt.nativeStep(SourceFile)
at androidx.sqlite.driver.bundled.BundledSQLiteStatementKt.access$nativeStep(BundledSQLiteStatement.jvmAndroid.kt:0)
at androidx.sqlite.driver.bundled.BundledSQLiteStatement.step(BundledSQLiteStatement.jvmAndroid.kt:0)
at androidx.sqlite.driver.bundled.BundledSQLiteStatement.step(BundledSQLiteStatement.jvmAndroid.kt:100)
at androidx.sqlite.SQLite.execSQL(com.google.android.gms:play-services-mlkit-barcode-scanning@@18.3.1:56)
at androidx.room.coroutines.PooledConnectionImpl.endTransaction(ConnectionPoolImpl.kt:420)
at androidx.room.coroutines.PooledConnectionImpl.transaction(ConnectionPoolImpl.kt:388)
at androidx.room.coroutines.PooledConnectionImpl.isRecycled(ConnectionPoolImpl.kt:0)
at androidx.room.coroutines.PooledConnectionImpl.access$isRecycled(ConnectionPoolImpl.kt:0)
at androidx.room.coroutines.PooledConnectionImpl.withTransaction(ConnectionPoolImpl.kt:0)
at androidx.room.coroutines.PooledConnectionImpl.withTransaction(ConnectionPoolImpl.kt:344)
at kotlin.coroutines.intrinsics.IntrinsicsKt__IntrinsicsKt.getCOROUTINE_SUSPENDED
at androidx.room.util.DBUtil__DBUtil_androidKt$performInTransactionSuspending$3$invokeSuspend$$inlined$internalPerform$1.invokeSuspend(DBUtil.kt:0)
at androidx.room.util.DBUtil__DBUtil_androidKt$performInTransactionSuspending$3$invokeSuspend$$inlined$internalPerform$1.invokeSuspend(DBUtil.kt:59)
at androidx.room.util.DBUtil__DBUtil_androidKt$performInTransactionSuspending$3$invokeSuspend$$inlined$internalPerform$1.invoke(DBUtil.kt:0)
at androidx.room.util.DBUtil__DBUtil_androidKt$performInTransactionSuspending$3$invokeSuspend$$inlined$internalPerform$1.invoke(DBUtil.kt:0)
at kotlin.coroutines.intrinsics.IntrinsicsKt__IntrinsicsKt.getCOROUTINE_SUSPENDED
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$4.invokeSuspend(ConnectionPoolImpl.kt:0)
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$4.invokeSuspend(ConnectionPoolImpl.kt:144)
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$4.invoke(ConnectionPoolImpl.kt:0)
at androidx.room.coroutines.ConnectionPoolImpl$useConnection$4.invoke(ConnectionPoolImpl.kt:0)
at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(ChildStackFactory.kt:43)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext
at kotlinx.coroutines.BuildersKt.withContext(DebugStrings.kt:0)
at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext
at kotlinx.coroutines.BuildersKt.withContext(DebugStrings.kt:1)
at androidx.room.coroutines.ConnectionPoolImpl.useConnection(ConnectionPoolImpl.kt:144)
at androidx.room.RoomDatabase.useConnection$room_runtime_release(RoomDatabase.android.kt:0)
at androidx.room.RoomConnectionManager.useConnection
at androidx.room.RoomDatabase.useConnection$room_runtime_release(RoomDatabase.android.kt:587)
at kotlin.coroutines.intrinsics.IntrinsicsKt__IntrinsicsKt.getCOROUTINE_SUSPENDED
at androidx.room.util.DBUtil__DBUtil_androidKt$performInTransactionSuspending$3.invokeSuspend(DBUtil.android.kt:0)
at androidx.room.util.DBUtil__DBUtil_androidKt$performInTransactionSuspending$3.invokeSuspend(DBUtil.android.kt:243)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:0)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:644)
at java.lang.Thread.run(Thread.java:1012)
ae...@gmail.com <ae...@gmail.com> #53
Hi. I got a new error: Fatal Exception: android.database.SQLException: Error code: 5, message: Timed out attempting to acquire a reader connection.
Writer pool:
Xu1@d088033 (capacity=1)
[1] - ow@e204ff0
Status: Free connection
Reader pool:
Xu1@6c0ec8f (capacity=4)
[1] - ow@dd27b1c
Status: Acquired connection
Coroutine: [go2@7c92325, eo2{Active}@cf76cfa, Dispatchers.IO]
Acquired:
at lR.C(Unknown Source:310)
at iR.invokeSuspend(Unknown Source:13)
at dq.resumeWith(Unknown Source:8)
at TR1.o(Unknown Source:6)
at W.resumeWith(Unknown Source:22)
at dq.resumeWith(Unknown Source:31)
at xc0.run(Unknown Source:109)
at K2.run(Unknown Source:1024)
at qc2.run(Unknown Source:2)
at SV.run(Unknown Source:93)
[2] - ow@71a2d08
Status: Free connection
[3] - ow@c0dcca1
Status: Free connection
[4] - ow@bbf5ec6
Status: Free connection
da...@google.com <da...@google.com> #54
"Timed out attempting to acquire a reader connection" is a better error than "database is locked", hehe
It seems you have 3 zombie connections in your pool. I suspect this is because currently don't refill the pool when a connection dies. I'll work on a change to make sure the pool is always of the correct size. Thanks again for your patience on helping us make the library better.
ap...@google.com <ap...@google.com> #55
Project: platform/frameworks/support
Branch: androidx-main
Author: Daniel Santiago Rivera <
Link:
Reimplement suspending pool with Semaphore
This changes the implementation of Room's connection pool to use a Semaphore instead of a Channel. It makes the implementation simpler and removes the possibility of connections getting 'lost' due to the various branches needed to handle when a channel send / receive fails.
The Channel was also used as a 'queue'; instead, a circular array is now used to push and pop connections acquired from and released into the pool. The Semaphore still controls the number of connections being locked and unlocked and provides the suspending API that waits for a connection to be free.
Bug: 322386871
Bug: 380088809
Test: BaseConnectionPoolTest
Change-Id: I9b7f2d44b511eb3e3d8cdaf8cb8905bb6203f8c5
Files:
- M room/room-runtime/src/commonMain/kotlin/androidx/room/coroutines/ConnectionPoolImpl.kt
- M room/room-runtime/src/commonTest/kotlin/androidx/room/coroutines/BaseConnectionPoolTest.kt
Hash: dc4715197f7869c87af3025acba91a0c8a57db4d
Date: Mon Mar 17 12:01:10 2025
da...@google.com <da...@google.com> #56
The next release (2.7.0-rc03) will have a change that should alleviate the 'Timed out attempting to acquire a reader connection' issue. I'll keep this issue open so others can report whether they are still experiencing it.
ae...@gmail.com <ae...@gmail.com> #57
Hi. Thank you for your fixes. I have released 2.7.0-rc01 to 100% of users. So far it has about 22,000 installs and I don't see the error "SQLException: Error code: 5, message: database is locked" anymore.
Description
Component used: RoomDB
Version used: Room - 2.7.0-alpha11, sqlite-bundled - 2.5.0-alpha11
Devices/Android versions reproduced on: Transsion Tecno, Nokia C21 Plus, Redmi A1+. Android 11-14.
Crash stack:
Please fix it.