I confirm I'm having this problem too since migrating to Bionic from Xenial.
My setup is fairly similar to the original bug reporter's: I have 5 drives (22 TB) in RAID1, organized into about 10 subvolumes with up to 20 snapshots per subvolume.
After some hours of running normally, [btrfs-transaction] goes into D state, and everything btrfs-related slowly grinds to a stall, with any program that touches the filesystem ending up in D state as well.
The call trace I have also references btrfs_qgroup_trace_extent_post.
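For anyone else trying to confirm the same stall, the commands below are a rough diagnostic sketch (the mount point /mnt/data is a placeholder for your own filesystem; the root-only steps are left commented out):

```shell
#!/bin/sh
# List tasks stuck in uninterruptible sleep (D state),
# e.g. [btrfs-transaction] once the stall begins
ps -eo state,pid,comm | awk '$1 ~ /^D/ {print}'

# Dump kernel stack traces of all blocked tasks into the kernel log
# (requires sysrq and root); the traces then appear in `dmesg`:
# echo w > /proc/sysrq-trigger

# Since the trace mentions btrfs_qgroup_trace_extent_post, it is worth
# checking whether quotas (qgroups) are enabled on the filesystem:
# btrfs qgroup show /mnt/data
```

The `echo w > /proc/sysrq-trigger` dump is how I captured the trace referencing btrfs_qgroup_trace_extent_post in the first place.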
I'm currently testing 5.0-rc8 from the Ubuntu mainline kernel PPA, to see whether the problem is still there.
Michael, did you end up reporting the problem upstream? I would be keen to do it on the btrfs mailing list, as soon as I know whether this is fixed in 5.0 or not.