Comment 18 for bug 471512

Revision history for this message
DanielW (dw-danielwinter) wrote :

Ok, I am not an Ubuntu user, but I suffered from the same problem with Arch Linux, and I have found a solution (at least for me; I hope it works for you too).

My problem in short:

Setup: Synology Diskstation 209 NAS connected over a 1 Gigabit network to my PC

Read performance using smbclient: about 55 MB/s
Read performance using kde-kio-smb: about 12 MB/s
Read performance using cifs: about 5-6 MB/s !

(For reference: with Windows 7 I get about 64 MB/s; with FTP it is about 70 MB/s)
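If you want to reproduce these numbers yourself, a simple way to measure raw sequential read throughput is with dd (the mount point and file name below are placeholders for a large file on your mounted share):

```shell
# Drop the page cache first so the read really goes over the network
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

# Read 1 GB sequentially; dd prints the achieved throughput at the end
# (/mnt/nas/testfile is a placeholder for a big file on the share)
dd if=/mnt/nas/testfile of=/dev/null bs=1M count=1024
```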

I looked into it with Wireshark and found that cifs reads in 4096-byte blocks, while smbclient uses bigger blocks.

Setting rsize has no effect on this (reads still happen in 4096-byte blocks).

Well, to make it short, here is what changed it:

1. Set CIFSMaxBufSize=130048 as a kernel module option for the cifs module.
 (I don't know about Ubuntu, but I believe a line "options cifs CIFSMaxBufSize=130048" in /etc/modprobe.d/modprobe.conf should do it.)
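On most distributions this can be done roughly like the following (the file name cifs.conf is my choice, and this assumes cifs is built as a loadable module rather than into the kernel):

```shell
# Persist the module option (any .conf file under /etc/modprobe.d/ works)
echo "options cifs CIFSMaxBufSize=130048" | sudo tee /etc/modprobe.d/cifs.conf

# Reload the module so the new buffer size takes effect
# (unmount all cifs shares first)
sudo modprobe -r cifs
sudo modprobe cifs
```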

2. Use direct and rsize=130048 as mount options.

Without direct as a mount option, the value set for rsize seems to be ignored.
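Putting the two mount options together, a mount command could look like this (server name, share name, mount point, and username are placeholders for your own setup):

```shell
# Mount the share with direct I/O and a 130048-byte read size
sudo mount -t cifs //diskstation/share /mnt/nas \
    -o direct,rsize=130048,username=youruser
```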

With these changes I get about 62 MB/s with cifs :-)

Here is the relevant part of the cifs README about the "direct" option:

"Do not do inode data caching on files opened on this mount.
  This precludes mmapping files on this mount. In some cases
  with fast networks and little or no caching benefits on the
  client (e.g. when the application is doing large sequential
  reads bigger than page size without rereading the same data)
  this can provide better performance than the default
  behavior which caches reads (readahead) and writes
  (writebehind) through the local Linux client pagecache
  if oplock (caching token) is granted and held. Note that
  direct allows write operations larger than page size
  to be sent to the server."

I hope this works for you too.

Daniel