Comment 16 for bug 255292

Andrew Bennetts (spiv) wrote:

John, my suspicion is that bzr isn't actually doing anything wrong (despite where this bug is filed), but instead that the SFTP conversation between paramiko and bazaar.launchpad.net (which is basically Twisted's Conch server) is going awry. i.e. the traceback doesn't seem to be saying that bzr is asking for 1,714,499,681 bytes, but instead that the SFTP client has received an SSH or SFTP packet with a length-prefix claiming to be that huge, apparently in response to a file open request.
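To make that concrete (this is just an illustration, not output from the failing run): the length prefix is four bytes decoded as a big-endian unsigned int, so if stray data lands where the prefix should be, the client will faithfully treat it as a gigantic packet size:

>>> import struct
>>> struct.unpack('>I', 'f10a')
(1714499681,)
>>> struct.unpack('>I', 'f0\xb0e')
(1714466917,)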

jml, can you explain how you were able to reproduce this? I'm intrigued by your mention of "info over sftp"; the traceback in the original report is triggered during a branch unlock, which reads the .bzr/branch/lock/held/info file (to find out if the lock was broken while this client thought it held it). But 'bzr info' shouldn't acquire a write lock. If your tracebacks differ substantially from the original report, it's probably worth attaching them to this report.

FWIW:

>>> import struct
>>> struct.pack('>I', 1714499681)
'f10a'
>>> struct.pack('>I', 1714466917)
'f0\xb0e'

Despite being mostly ASCII bytes, those don't look like anything I recognise. It would be good to somehow get a complete capture of the SFTP conversation in case that makes the problem more obvious...
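In case it helps with capturing that, here's a rough sketch of getting paramiko to log and hex-dump its traffic (a standalone illustration with placeholder username/key/path details, not the way bzr wires up its transport):

import socket
import paramiko

# Write paramiko's DEBUG-level protocol log to a file.
paramiko.util.log_to_file('sftp-capture.log')

sock = socket.create_connection(('bazaar.launchpad.net', 22))
transport = paramiko.Transport(sock)
transport.set_hexdump(True)  # hex-dump every packet sent and received
transport.connect(username='LP_USERNAME',
                  pkey=paramiko.RSAKey.from_private_key_file('/path/to/id_rsa'))

sftp = paramiko.SFTPClient.from_transport(transport)
# Reproduce the failing operation here, e.g. reading the lock info file:
# sftp.open('path/to/branch/.bzr/branch/lock/held/info').read()
sftp.close()
transport.close()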

One other thing I notice: paramiko's _read_packet method reads a 4-byte length prefix, then reads the rest of the packet all at once; i.e. it calls _read_all(4), then _read_all(size). So that's why you see reads of 4, 32777, 4, 32777, 4, 32777, etc. But interestingly, not only are the final read sizes impossibly huge, they also break this pattern:

"""
4
1714499681
1714466917
"""

Why didn't it read 4 bytes after reading the 1714499681 bytes?
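For reference, the shape of the read loop being described is roughly this (a sketch of the behaviour, not paramiko's actual code):

import struct

def _read_all(sock, n):
    # Block until exactly n bytes have been received.
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise EOFError('connection closed with %d bytes left to read'
                           % (n - len(data)))
        data += chunk
    return data

def _read_packet(sock):
    # Every packet is a 4-byte big-endian length prefix followed by that many bytes.
    size = struct.unpack('>I', _read_all(sock, 4))[0]
    return _read_all(sock, size)

In that loop every large read should be sandwiched between 4-byte prefix reads, so the missing 4 between the two huge sizes is exactly what needs explaining.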