Fixed
Status Update
Comments
ra...@google.com <ra...@google.com> #2
Hi,
This looks really bad, but without a repro app we really cannot do anything about it. Is it possible for you to provide a sample app that reproduces your problem?
d....@infotech.team <d....@infotech.team> #3
Attached is a simple test app. It inserts 10,000 items into the database when the activity starts and then displays them using paging; the database is also updated every second. On screen, if you drag the fast-scroll bar up and down quickly a few times, it will eventually crash with this IndexOutOfBoundsException.
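For reference, here is a minimal Kotlin sketch of the setup described above, assuming Room plus the Paging library's LivePagedListBuilder. All names (AppDatabase, ItemDao with pagedItems/insertAll/updateOne, Item, ItemAdapter, R.id.list) are illustrative stand-ins, not taken from the attached sample.

    import android.os.Bundle
    import android.os.Handler
    import android.os.Looper
    import androidx.appcompat.app.AppCompatActivity
    import androidx.paging.LivePagedListBuilder
    import androidx.recyclerview.widget.RecyclerView
    import java.util.concurrent.Executors

    class ReproActivity : AppCompatActivity() {
        private val io = Executors.newSingleThreadExecutor()
        private val handler = Handler(Looper.getMainLooper())

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)
            val dao = AppDatabase.get(this).itemDao()
            val adapter = ItemAdapter() // a PagedListAdapter with a DiffUtil.ItemCallback
            findViewById<RecyclerView>(R.id.list).adapter = adapter

            // Seed 10,000 rows when the activity starts.
            io.execute { dao.insertAll((1..10_000).map { Item(it, "item $it") }) }

            // Display them with paging, backed by Room's DataSource.Factory.
            LivePagedListBuilder(dao.pagedItems(), /* pageSize = */ 20)
                .build()
                .observe(this) { adapter.submitList(it) }

            // Update the table every second; each invalidation emits a new
            // PagedList while the fast-scroll bar jumps far from the initial load.
            handler.post(object : Runnable {
                override fun run() {
                    io.execute { dao.updateOne() }
                    handler.postDelayed(this, 1_000)
                }
            })
        }
    }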
su...@google.com <su...@google.com> #4
Thanks for the great repro case, tracked it down, investigating a fix.
Problematic behavior:
1) AsyncPagedListDiffer receives new list, takes snapshot for diffing (ui thread)
2) At roughly the same time:
2a) Diff snapshot of old snapshot vs new snapshot (bg thread)
2b) New data arrives in new list, way earlier in the list, so the list is essentially: new page, lots of empty pages, initial load
3) AsyncPagedListDiffer updates lastLoad position with diffutil position mapping
In 3, we try to map the lastLoad position from the old snapshot to the new snapshot, but fail to distinguish between the new list and the new snapshot. Because of all the empty pages, that's a huge discrepancy. This is why the huge scrollbar plus lots of swaps triggers this so well: there are lots of swaps to new PagedLists while loads are happening very far from the initial load positions.
Will need to account for pages loaded between the time of the snapshot and the time of the swap.
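To make step 3 concrete, here is a hedged Kotlin sketch of that mapping step, not the actual AsyncPagedListDiffer code; leadingItemsLoadedSinceSnapshot is an invented parameter standing in for the pages that arrived between snapshot time and swap time.

    import androidx.recyclerview.widget.DiffUtil

    fun mapLastLoadPosition(
        diff: DiffUtil.DiffResult,
        lastLoadInOldSnapshot: Int,
        leadingItemsLoadedSinceSnapshot: Int, // invented for illustration
        newListSize: Int
    ): Int {
        // The diff was computed old-snapshot vs. new-snapshot, so the converted
        // position is in snapshot coordinates, not the swapped-in list's.
        val converted = diff.convertOldPositionToNew(lastLoadInOldSnapshot)
        val inSnapshot = if (converted == DiffUtil.DiffResult.NO_POSITION) 0 else converted

        // Shift by the items loaded since the snapshot and clamp, so the result
        // can never index past the bounds of the list actually being displayed.
        return (inSnapshot + leadingItemsLoadedSinceSnapshot)
            .coerceIn(0, maxOf(0, newListSize - 1))
    }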
Description
Version used: 1.0.0-alpha07
Devices/Android versions reproduced on:
java.lang.IllegalStateException: Apps may not schedule more than 100 distinct jobs
at android.os.Parcel.readException(Parcel.java:1692)
at android.os.Parcel.readException(Parcel.java:1637)
at android.app.job.IJobScheduler$Stub$Proxy.schedule(IJobScheduler.java:158)
at android.app.JobSchedulerImpl.schedule(JobSchedulerImpl.java:42)
at androidx.work.impl.background.systemjob.SystemJobScheduler.scheduleInternal(SystemJobScheduler.java:126)
at androidx.work.impl.background.systemjob.SystemJobScheduler.schedule(SystemJobScheduler.java:95)
at androidx.work.impl.Schedulers.schedule(Schedulers.java:99)
at androidx.work.impl.utils.EnqueueRunnable.scheduleWorkInBackground(EnqueueRunnable.java:114)
at androidx.work.impl.utils.EnqueueRunnable.run(EnqueueRunnable.java:86)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
at java.lang.Thread.run(Thread.java:761)
I already reported this bug multiple times, but it is still not fixed. I attached the sample project from my previous report. In our production app there is only one schedule operation (it calculates the worker tree once and schedules them all in the same thread frame), like the one you see in the sample app, but it still crashes.
I attached a sample project.
Run the app and press the button multiple times in quick succession.
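For context, here is a minimal Kotlin sketch of the pattern that surfaces this exception, assuming each button press enqueues a fresh OneTimeWorkRequest; SyncWorker and the "sync" unique-work name are invented here. JobScheduler rejects an app's 101st pending job with exactly this IllegalStateException, and enqueuing as unique work is the usual app-side way to keep rapid presses from piling up distinct jobs (which does not rule out WorkManager over-scheduling internally, as the report describes).

    import android.content.Context
    import androidx.work.ExistingWorkPolicy
    import androidx.work.OneTimeWorkRequest
    import androidx.work.WorkManager
    import androidx.work.Worker
    import androidx.work.WorkerParameters

    // Hypothetical worker; the real sample's worker class is not shown here.
    class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
        override fun doWork(): Result = Result.success()
    }

    fun onButtonPressed(workManager: WorkManager) {
        val request = OneTimeWorkRequest.Builder(SyncWorker::class.java).build()

        // A plain enqueue() of a fresh request on each press asks JobScheduler
        // for a new distinct job; unique work with KEEP collapses repeated
        // presses into a single scheduled job instead of 100+ of them.
        workManager.enqueueUniqueWork("sync", ExistingWorkPolicy.KEEP, request)
    }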