Comment 52 for bug 550559

Edwin Chiu (edwin-chiu) wrote : Re: [Bug 550559] Re: hdd problems, failed command: READ FPDMA QUEUED

I'm just speaking from my own experience. I have WD drives in play and they
don't seem impacted at all, only the Seagates. As for others with issues, I
don't recall whether those were related or not. With the new PSU, the problem
pretty much went away for almost two months, but it has since resurfaced. I
rechecked all the cabling and everything looked fine. I've tried swapping
cables, with no difference. It's a bit of a mystery. For now, I will switch to
WD drives; that's a solution that works for me. Whether or not it works for
someone else, I have no idea. As for why this is the case: again, I don't
really have a logical explanation, just evidence from my own experience that
the problem doesn't manifest itself when using WD drives.
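For anyone who wants to try the libata.force=noncq workaround quoted below, a minimal sketch of how the parameter is typically added on a GRUB 2 system such as Ubuntu 10.04. The sketch edits a stand-in copy of the config; on a real system the file is /etc/default/grub (edited as root), followed by update-grub and a reboot:

```shell
# Demo: append libata.force=noncq to the kernel command line.
# "grub.demo" is a stand-in for /etc/default/grub; on a real
# system edit that file as root and then run update-grub.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > grub.demo
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 libata.force=noncq"/' grub.demo
cat grub.demo
```

NCQ can also be switched off per drive at runtime, without a reboot, by dropping the queue depth to 1 (e.g. `echo 1 | sudo tee /sys/block/sdb/device/queue_depth`), which is handy for testing whether NCQ is actually implicated before changing boot parameters.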

On Fri, Apr 1, 2011 at 10:39, Vasco <email address hidden> wrote:

> If this is indeed a power issue, then why does it manifest now? These
> two drives have been spinning in the same configuration for years,
> never had any problems.
>
> > Switching to a single 12V rail (max 50A) seems to have helped. It
> > doesn't really make sense, since the drive draws at most 2A, but it's
> > hard to ignore the fact that it appears to make a difference...
> What does "make a difference" mean: is it gone, or does it just appear
> less often? It could be coincidence, since to me the problem seems
> quite erratic.
>
> > Of course, only had these issues with seagate barracuda lp 2tb drives
> > so far (only seagate drives I have); sata2 variant of the drives.
> Other people here who don't have Seagates also have the problem, so
> we should be careful not to assume prematurely that this problem has
> a causal relationship with Seagate drives. It could be that Seagates
> are somehow more susceptible to the phenomenon, but the problem does
> not appear to be limited to them.
>
> > If you can rig it, try running the drives off a different PSU and see
> > if that works? Definitely less than ideal...
>
> I don't have a spare PSU lying around.
>
> >
> >
> >
> > On 2011-04-01, Vasco <email address hidden> wrote:
> >>> Add to your boot params: libata.force=noncq
> >>> It's not guaranteed to work, but it helps quite a bit.
> >> Reading the comments above, I don't see this helping much.
> >>
> >>>
> >>> Also are you running in any sort of RAID configuration?
> >> Yes, I am. I have four disks attached to the controller; on each pair
> >> of disks two partitions are in a RAID 1.
> >>
> >>> I found a new PSU
> >>> helped a little as well; I had very few incidents up until yesterday...
> >> I don't see a reason to assume this is related to the PSU. This very
> >> thread more or less proves this is either a bug in the Linux kernel
> >> or in firmware/hardware.
> >>
> >>> I still blame Seagate drives as being part of the problem.
> >> I just checked, and it turns out I do actually have two Seagate
> >> Barracuda 7200.10 disks (I thought I had only Samsung drives). And it
> >> is indeed one of those that is causing problems. This is interesting.
> >>
> >>>
> >>> On Thu, Mar 31, 2011 at 14:36, Vasco <email address hidden>
> wrote:
> >>>
> >>>> I can confirm this bug as well. I also have a Gigabyte mainboard
> >>>> with the SB700/800 chipset. I have no option to disable NCQ.
> >>>>
> >>>> System is running Ubuntu kernel 2.6.32-30-generic
> >>>>
>
> --
> You received this bug notification because you are a direct subscriber
> of the bug.
> https://bugs.launchpad.net/bugs/550559
>
> Title:
> hdd problems, failed command: READ FPDMA QUEUED
>
> Status in Ubuntu:
> Confirmed
>
> Bug description:
> Hello!
>
> I have a brand-new computer with an SSD and a SATA hard drive,
> specifically a 2 TB Seagate Barracuda XT (6 Gb/s). The latter is
> connected to a Marvell 9123 controller that I set to AHCI mode in the
> BIOS.
>
> I have the OS installed on the SSD, but when I try to read the 2 TB
> disk I get several errors.
>
> I tried moving the disk to another controller and got the same
> problem; I even removed the disk's partition table, with the same
> result.
>
> I checked the disk for flaws from Windows with HD Tune and the
> manufacturer's official verification tool, and neither reports any
> errors.
>
> I have tested kernel version 2.6.34-rc2, and the disk works properly
> with it.
>
> The errors given are the following:
>
> [ 9.115544] ata9: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x10 frozen
> [ 9.115550] ata9.00: failed command: READ FPDMA QUEUED
> [ 9.115556] ata9.00: cmd 60/04:00:d4:82:85/00:00:1f:00:00/40 tag 0 ncq 2048 in
> [ 9.115557]          res 40/00:18:d3:82:85/00:00:1f:00:00/40 Emask 0x4 (timeout)
> [ 9.115560] ata9.00: status: { DRDY }
> [ 9.115562] ata9.00: failed command: READ FPDMA QUEUED
> [ 9.115568] ata9.00: cmd 60/01:08:d1:82:85/00:00:1f:00:00/40 tag 1 ncq 512 in
> [ 9.115569]          res 40/00:18:d3:82:85/00:00:1f:00:00/40 Emask 0x4 (timeout)
> [ 9.115572] ata9.00: status: { DRDY }
> [ 9.115574] ata9.00: failed command: READ FPDMA QUEUED
> [ 9.115579] ata9.00: cmd 60/01:10:d2:82:85/00:00:1f:00:00/40 tag 2 ncq 512 in
> [ 9.115581]          res 40/00:18:d3:82:85/00:00:1f:00:00/40 Emask 0x4 (timeout)
> [ 9.115583] ata9.00: status: { DRDY }
> [ 9.115586] ata9.00: failed command: READ FPDMA QUEUED
> [ 9.115591] ata9.00: cmd 60/01:18:d3:82:85/00:00:1f:00:00/40 tag 3 ncq 512 in
> [ 9.115592]          res 40/00:18:d3:82:85/00:00:1f:00:00/40 Emask 0x4 (timeout)
> [ 9.115595] ata9.00: status: { DRDY }
> [ 9.115609] sd 8:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [ 9.115612] sd 8:0:0:0: [sdb] Sense Key : Aborted Command [current] [descriptor]
> [ 9.115616] Descriptor sense data with sense descriptors (in hex):
> [ 9.115618] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
> [ 9.115626] 1f 85 82 d3
> [ 9.115629] sd 8:0:0:0: [sdb] Add. Sense: No additional sense information
> [ 9.115633] sd 8:0:0:0: [sdb] CDB: Read(10): 28 00 1f 85 82 d4 00 00 04 00
> [ 9.115640] end_request: I/O error, dev sdb, sector 528843476
> [ 9.115643] __ratelimit: 18 callbacks suppressed
> [ 9.115646] Buffer I/O error on device sdb2, logical block 317299556
> [ 9.115649] Buffer I/O error on device sdb2, logical block 317299557
> [ 9.115652] Buffer I/O error on device sdb2, logical block 317299558
> [ 9.115655] Buffer I/O error on device sdb2, logical block 317299559
> [ 9.115671] sd 8:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [ 9.115674] sd 8:0:0:0: [sdb] Sense Key : Aborted Command [current] [descriptor]
> [ 9.115678] Descriptor sense data with sense descriptors (in hex):
> [ 9.115679] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
> [ 9.115687] 1f 85 82 d3
> [ 9.115690] sd 8:0:0:0: [sdb] Add. Sense: No additional sense information
> [ 9.115693] sd 8:0:0:0: [sdb] CDB: Read(10): 28 00 1f 85 82 d1 00 00 01 00
> [ 9.115700] end_request: I/O error, dev sdb, sector 528843473
> [ 9.115702] Buffer I/O error on device sdb2, logical block 317299553
> [ 9.115707] sd 8:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [ 9.115710] sd 8:0:0:0: [sdb] Sense Key : Aborted Command [current] [descriptor]
> [ 9.115714] Descriptor sense data with sense descriptors (in hex):
> [ 9.115716] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
> [ 9.115723] 1f 85 82 d3
> [ 9.115726] sd 8:0:0:0: [sdb] Add. Sense: No additional sense information
> [ 9.115729] sd 8:0:0:0: [sdb] CDB: Read(10): 28 00 1f 85 82 d2 00 00 01 00
> [ 9.115736] end_request: I/O error, dev sdb, sector 528843474
> [ 9.115738] Buffer I/O error on device sdb2, logical block 317299554
> [ 9.115743] sd 8:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [ 9.115746] sd 8:0:0:0: [sdb] Sense Key : Aborted Command [current] [descriptor]
> [ 9.115749] Descriptor sense data with sense descriptors (in hex):
> [ 9.115751] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
> [ 9.115759] 1f 85 82 d3
> [ 9.115762] sd 8:0:0:0: [sdb] Add. Sense: No additional sense information
> [ 9.115765] sd 8:0:0:0: [sdb] CDB: Read(10): 28 00 1f 85 82 d3 00 00 01 00
> [ 9.115771] end_request: I/O error, dev sdb, sector 528843475
> [ 9.115774] Buffer I/O error on device sdb2, logical block 317299555
> [ 16.243531] sd 8:0:0:0: timing out command, waited 7s
> [ 23.241557] sd 8:0:0:0: timing out command, waited 7s
>
>
> lsb_release -rd
> Description: Ubuntu lucid (development branch)
> Release: 10.04
>
> ProblemType: Bug
> DistroRelease: Ubuntu 10.04
> Package: yelp 2.29.5-0ubuntu3
> ProcVersionSignature: Ubuntu 2.6.32-17.26-generic 2.6.32.10+drm33.1
> Uname: Linux 2.6.32-17-generic x86_64
> NonfreeKernelModules: nvidia
> Architecture: amd64
> Date: Mon Mar 29 01:06:27 2010
> ExecutablePath: /usr/bin/yelp
> InstallationMedia: Ubuntu 10.04 "Lucid Lynx" - Beta amd64 (20100318)
> ProcEnviron:
> LANG=ca_ES.utf8
> SHELL=/bin/bash
> SourcePackage: yelp
>
> To unsubscribe from this bug, go to:
> https://bugs.launchpad.net/ubuntu/+bug/550559/+subscribe
>
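A closing note for readers triaging similar logs: dmesg names the port (ata9.00) while the SCSI layer names the device (sd 8:0:0:0, i.e. /dev/sdb), and the two can be correlated through the block device's sysfs symlink. A minimal sketch, using an illustrative path of the kind `readlink -f /sys/block/sdb` returns on a live system:

```shell
# Pull the libata port name (e.g. ata9) out of a block device's
# sysfs path. The sample path below is illustrative; on a live
# system obtain the real one with: readlink -f /sys/block/sdb
link='/sys/devices/pci0000:00/0000:00:11.0/ata9/host8/target8:0:0/8:0:0:0/block/sdb'
port=$(printf '%s\n' "$link" | grep -o 'ata[0-9]*' | head -n 1)
echo "$port"   # ata9
```

This makes it easy to confirm that the drive throwing READ FPDMA QUEUED errors really is the suspected Seagate before swapping hardware.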