Slow cifs in 11.04

Bug #810606 reported by zzarko
This bug affects 73 people
Affects: cifs-utils (Ubuntu)
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

I have two machines at home, one acting as a server (with Ubuntu) and the other acting as a client (Ubuntu/Windows). Before the upgrade to 11.04, both machines ran Ubuntu 10.10. I have a few Samba shared directories, and the speed of copying from/to the server was ~9-10 MB/s, both from Ubuntu and from Windows.
After upgrading the server machine to 11.04, the speed remained the same from Ubuntu and from Windows. But after upgrading Ubuntu on the client machine, the speed dropped to about 500 KB/s, which is unusable. If I mount from Nautilus with gvfs, I get ~5 MB/s, which is still about half the speed I had before. From Windows, the speed is as it was, so the Ubuntu client is to blame.
Additionally, mounting shares from Nautilus with gvfs always shows 0 bytes free on the shared directories (although there is plenty of free space), and I cannot copy anything to them, only from them.
Currently, I have two mount points for those shares: one with cifs, so I can see the amount of free space (and for slow copying to the server), and one with gvfs for copying from the server (still slower than before the upgrade); the combination is sketched below.
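For reference, the dual-mount workaround looks roughly like this (a sketch only: the share and mount point names are the ones from the fstab lines posted later in this thread, and gvfs-mount is assumed to come from the gvfs-bin package):

sudo mount /media/Del1                    # cifs mount from /etc/fstab: free space reported correctly, but slow writes
gvfs-mount smb://192.168.2.100/del1       # gvfs mount of the same share: faster reads, but 0 bytes free reported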

This bug was filed for 9.10 (https://bugs.launchpad.net/ubuntu/+source/linux/+bug/471512), but it became obsolete and it was suggested that someone should file a new bug. So, I did...

Revision history for this message
Bohdan (bohdan-linda) wrote :

I can confirm a similar bug in 11.04:

$ iperf -c 192.168.11.1 -f MBytes
[ 3] 0.0-10.0 sec 695 MBytes 69.5 MBytes/sec
[ 3] 0.0-10.0 sec 683 MBytes 68.3 MBytes/sec
[ 3] 0.0-10.0 sec 693 MBytes 69.3 MBytes/sec

smbclient:
getting file \test of size 307200000 as test (28702.6 KiloBytes/sec) (average 28702.6 KiloBytes/sec)

mount via cifs:
307200000 bytes (307 MB) copied, 61.1143 s, 5.0 MB/s

Playing with CIFSMaxBufSize=130048 and rsize did not help.

PS: this bug has been confirmed since 2009-11-26 and has been present in Ubuntu since 9.10. At this rate, I expect that in 13.04 we will reopen the same bug again, once the current distro is no longer supported.

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi, can you post your client's /etc/fstab file so we can see your mount options? Many thanks.

Changed in cifs-utils (Ubuntu):
assignee: nobody → Massimo Forti (maxforti)
Revision history for this message
zzarko (zzarko-gmail) wrote :

Here are my mount options:
//192.168.2.100/del1 /media/Del1 cifs credentials=/etc/credentials.del,rw,_netdev,uid=zzarko,gid=users 0 0
//192.168.2.100/del2 /media/Del2 cifs credentials=/etc/credentials.del,rw,_netdev,uid=zzarko,gid=users 0 0

Changed in cifs-utils (Ubuntu):
assignee: Massimo Forti (maxforti) → nobody
Revision history for this message
Jean- (jean-helou) wrote :

I know this is open source and anyone can help. I don't have the skillset to do it or I definitely would. This bug is not a blocker and power users can easily work around it using ftp for example.

But a NAS, Gb Ethernet and/or multiple computers at home have become usual.
Setting up file shares in Windows and in Ubuntu is equally simple, but the file transfer speed difference is obvious. As a first (or continued) experience, such a loss of performance is a huge letdown.

After 3 years, there isn't even the beginning of a lead as to where the problem really is, which is really worrisome.

I understand that 9.10 is no longer a supported distribution, but forcing a new bug report means that the bug won't stand out as much in the bug stats.

Anyway, I get back home next week, where I experience the problem. I will be able to run all the tests you want with the following combinations:

Synology NAS to and from ubuntu
Synology NAS to and from windows
Windows to and from ubuntu

I will also provide whatever information you want, but please help diagnose this.

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi, we understand your disappointment. Now let's try to check a couple of things:

- Have you created the file /etc/modprobe.d/cifs.conf with the option "options cifs CIFSMaxBufSize = 130048" inside?
- Have you added the following two options to the mount points in /etc/fstab: rsize=32768, wsize=32768?

For example, the two lines posted by zzarko don't seem to have the two options mentioned above.

Let me know if the settings above have improved your transfer speed (an example configuration is sketched below). Thank you.
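For reference, a minimal sketch of the two suggested changes, reusing the share and mount point from zzarko's fstab above (adjust paths and options to your own setup; note that zzarko later reports the module option is only accepted without spaces around '='):

# /etc/modprobe.d/cifs.conf
options cifs CIFSMaxBufSize=130048

# /etc/fstab (zzarko's line with rsize/wsize appended)
//192.168.2.100/del1 /media/Del1 cifs credentials=/etc/credentials.del,rw,_netdev,uid=zzarko,gid=users,rsize=32768,wsize=32768 0 0

# apply: unmount the share, reload the cifs module so it re-reads the option, remount
sudo umount /media/Del1
sudo modprobe -r cifs && sudo modprobe cifs
sudo mount /media/Del1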

Changed in cifs-utils (Ubuntu):
status: New → Incomplete
Revision history for this message
zzarko (zzarko-gmail) wrote :

I tried the tweaks suggested by Massimo Forti, and I can say that they did the trick. The speed with cifs jumped to ~7.5 MB/s (read and write), which is much better than the 500 KB/s with cifs I had until now and the ~5 MB/s with gvfs (but still less than the ~10 MB/s I had before). These settings almost solved the problem in my case ("almost" only because I had an even greater transfer speed before; but don't get me wrong, this is a fantastic improvement).

Massimo, thanks again, you solved a very big problem of mine. I will try to see if there is something else limiting the speed to ~7.5 MB/s in the next few days and will report what I find.

Revision history for this message
zzarko (zzarko-gmail) wrote :

I forgot to mention, I got
[ 6078.439293] cifs: `' invalid for parameter `CIFSMaxBufSize'
with
"options cifs CIFSMaxBufSize = 130048"
After I removed the spaces, it was OK:
"options cifs CIFSMaxBufSize=130048"

Revision history for this message
Bohdan (bohdan-linda) wrote :

$ cat /etc/modprobe.d/cifs.conf
options cifs CIFSMaxBufSize=130048

The options in fstab are:
rsize=32768,wsize=32768,_netdev,password=,dir_mode=0775,file_mode=0664,gid=floppy,iocharset=utf8,uid=bohdan 0 0

$ dd if=/server/share/test of=./test
2000000+0 records in
2000000+0 records out
1024000000 bytes (1.0 GB) copied, 210.686 s, 4.9 MB/s

in win7 I am getting ~35MB/s

Revision history for this message
Ubuntu Uefi User (ubuntuuefiuser) wrote :

Confirming the bug again, present from 9.10 through 11.04.

Windows 7 --> samba share on ubuntu 11.04 = 70 MB/sec Read --> Write on Gigabit Lan
mount.cifs on ubuntu 11.04 <-- Windows 7 = 16 MB/sec Write <-- Read on Gigabit Lan

Using mount.cifs parameters:
mount.cifs $volume $backupvolme -o credentials=$file,ro,uid=user,gid=user,forcedirectio

Tried options cifs CIFSMaxBufSize=130048; it gave errors in combination with the mount option forcedirectio while using rsync. Otherwise, no speed increase.

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi all. Thanks for your replies. Can you post the result of "cat /proc/fs/cifs/DebugData"? Thanks.

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi, one last little thing. Let's now increase the debug messages to try to understand the problem:

besides running
"cat /proc/fs/cifs/DebugData"

also run:
"cat /proc/fs/cifs/Stats"

echo 4 > /proc/fs/cifs/cifsFYI (this command enables additional timing information to be logged in dmesg)

and then, after copying a file, run dmesg | grep cifs and post the result (a consolidated sketch follows below). Thanks.
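The steps above, put into one sequence (a sketch: /proc/fs/cifs/Stats exists only when the kernel was built with CIFS statistics support, writing cifsFYI needs root, and the dd source path is just an example using a mount point from this thread):

cat /proc/fs/cifs/DebugData                 # cifs module version, active sessions and shares
cat /proc/fs/cifs/Stats                     # per-share operation counters, if available
echo 4 | sudo tee /proc/fs/cifs/cifsFYI     # raise cifs debug/timing verbosity
dd if=/media/Del1/testfile of=/tmp/testfile # copy a file over the cifs mount
dmesg | grep cifs                           # collect the extra cifs messages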

Revision history for this message
Bohdan (bohdan-linda) wrote :

$ cat /proc/fs/cifs/DebugData
Display Internal CIFS Data Structures for Debugging
---------------------------------------------------
CIFS Version 1.71
Features: dfs lanman posix spnego xattr
Active VFS Requests: 0
Servers:
1) Name: 192.168.11.1 Domain: LABS Uses: 2 OS: Unix
        NOS: Samba 3.5.6 Capability: 0x80f3fd
        SMB session status: 1 TCP status: 1
        Local Users To Server: 1 SecMode: 0x2 Req On Wire: 0
        Shares:
        1) \\server\share Mounts: 1 Type: NTFS DevInfo: 0x0 Attributes: 0x1002f
PathComponentMax: 255 Status: 0x1 type: 0

        2) \\server\temp Mounts: 1 Type: NTFS DevInfo: 0x0 Attributes: 0x1002f
PathComponentMax: 255 Status: 0x1 type: 0

        MIDs:

$ dd if=/server/share/test1 of=test1
200000+0 records in
200000+0 records out
102400000 bytes (102 MB) copied, 26.344 s, 3.9 MB/s
$ cat /proc/fs/cifs/cifsFYI
4
$ dmesg | grep cifs
$
$ cat /proc/fs/cifs/Stats
cat: /proc/fs/cifs/Stats: No such file or directory

Revision history for this message
Jean- (jean-helou) wrote :

Here is the result of my extensive tests

For all tests:
192.168.1.10 is a Synology NAS with 1Gb ethernet - smbd 3.2.8 (DSM 3.1-1613)
192.168.1.9 is a Dell XPS 1330 with 100Mb ethernet - up-to-date 11.04 distro (smbclient says 3.5.8)
192.168.1.11 is an Asus U36jc with 1Gb ethernet - up-to-date Win 7

They are linked to a Netgear GS108 switch rated for Gb Ethernet.

No other equipment is connected

The NAS is the first SMB server

Download test
iperf -c 192.168.1.10 -f M -p 2222
------------------------------------------------------------
Client connecting to 192.168.1.10, TCP port 2222
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.9 port 55114 connected with 192.168.1.10 port 2222
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 99.5 MBytes 9.94 MBytes/sec

Upload test

DiskStation> iperf -c 192.168.1.9 -f M
------------------------------------------------------------
Client connecting to 192.168.1.9, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 5] local 192.168.1.10 port 2142 connected with 192.168.1.9 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 112 MBytes 11.2 MBytes/sec

9.94 MBytes/sec is my baseline and sounds right for 100 Mb Ethernet.

With no modification on either the server or the client here is what I get with smbclient.

SMBCLIENT Download test

smbclient //192.168.1.10/public
Enter jean's password:
Domain=[JEAN] OS=[Unix] Server=[Samba 3.2.8]
smb: \> get ubuntu-11.04-server-i386.iso
getting file \ubuntu-11.04-server-i386.iso of size 686493696 as ubuntu-11.04-server-i386.iso (11126,9 KiloBytes/sec) (average 11126,9 KiloBytes/sec)
smb: \>

SMBCLIENT Upload test

jean@xps:~/opt/mnt$ smbclient //192.168.1.10/public
Enter jean's password:
Domain=[JEAN] OS=[Unix] Server=[Samba 3.2.8]
smb: \> put ubuntu-11.04-server-i386.iso.up
putting file ubuntu-11.04-server-i386.iso.up as \ubuntu-11.04-server-i386.iso.up (9881,6 kb/s) (average 9881,6 kb/s)
smb: \>

smbclient yields speeds which seem fine for a 100 Mb connection.

fstab

//192.168.1.10/public /mnt/test cifs credentials=/etc/credentials.del,rw,_netdev,uid=jean,gid=users 0 0

CIFS Download test

dd if=ubuntu-11.04-server-i386.iso of=/tmp/ubuntu-11.04-server-i386.iso.up
1340808+0 records in
1340808+0 records out
686493696 bytes (686 MB) copied, 101.899 s, 6.7 MB/s

CIFS Upload test

jean@xps:/mnt/test$ dd if=/tmp/ubuntu-11.04-server-i386.iso.up of=ubuntu-11.04-server-i386.iso.up
1340808+0 records in
1340808+0 records out
686493696 bytes (686 MB) copied, 820.385 s, 837 kB/s

CIFS without modifications downloads fine but uploads very, very slowly; we lost an order of magnitude. This is the same as what I observed through Nautilus, using only graphical interfaces and no command line.

Following this bug's (810606) first debug instruction:
create
 /etc/modprobe.d/cifs.conf
with content
 options cifs CIFSMaxBufferSize=130048

CIFS 2 Download test
jean@xps:/mnt/test$ dd if=ubuntu-11.04-server-i386.iso of=/tmp/ubuntu-11.04-server-i386.iso.up
1340808+0 enregist...


Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hello everyone. I must admit that what is strange is that each of you gets a different result in the speed tests:

@ Boletis Georgios said he reaches about 60 MB outside Nautilus and 8-10 MB in Nautilus;
@ Bohdan says no more than 4-5 MB;
@ zzarko has reached 7-8 MB;
@ Ubuntu Uefi User said he gets to 16 MB, but with the option forcedirectio;

The results are very different; we have to understand what is not working, not only at the cifs level, but also in the rest of the Ubuntu configuration.

Revision history for this message
Massimo Forti (slackwarelife) wrote :

May I ask you all to run the command cat /proc/fs/cifs/DebugData (except @Bohdan, who has already posted the result)? This command is needed to determine the version of the cifs module in use on your Ubuntu box. Thanks.

Revision history for this message
Jean- (jean-helou) wrote :

There you go, with both mounts active:

jean@xps:/mnt$ cat /proc/fs/cifs/DebugData
Display Internal CIFS Data Structures for Debugging
---------------------------------------------------
CIFS Version 1.71
Features: dfs lanman posix spnego xattr
Active VFS Requests: 0
Servers:
1) Name: 192.168.1.10 Domain: JEAN Uses: 1 OS: Unix
 NOS: Samba 3.2.8 Capability: 0x80f3fd
 SMB session status: 1 TCP status: 1
 Local Users To Server: 1 SecMode: 0x3 Req On Wire: 0
 Shares:
 1) \\192.168.1.10\public Mounts: 1 Type: NTFS DevInfo: 0x0 Attributes: 0x50027
PathComponentMax: 255 Status: 0x1 type: 0

 MIDs:

2) Name: 192.168.1.11 Domain: JEAN Uses: 1 OS: Windows 7 Professional 7601 Service Pack 1
 NOS: Windows 7 Professional 6.1 Capability: 0x1e3fc
 SMB session status: 1 TCP status: 1
 Local Users To Server: 1 SecMode: 0x3 Req On Wire: 0
 Shares:
 1) \\192.168.1.11\temp Mounts: 1 Type: NTFS DevInfo: 0x20 Attributes: 0xc700ff
PathComponentMax: 255 Status: 0x1 type: DISK

 MIDs:

Revision history for this message
Bohdan (bohdan-linda) wrote :

One more snippet showing why I think the problem is in CIFS:
$ smbclient //server/share
Enter bohdan's password:
Domain=[LABS] OS=[Unix] Server=[Samba 3.5.6]
Server not using user level security and no password supplied.
smb: \> get test
getting file \test of size 1024000000 as test (24693.8 KiloBytes/sec) (average 24693.8 KiloBytes/sec)
smb: \>

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hello, sorry for the delay. I'm trying to run the tests on my server, replicating your situation. I expect to give you some more information to work with in the coming days. Thanks to all.

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi all, to continue testing I need to know your exact configuration. Can you please post the result of:

$ modinfo cifs

so I can try to replicate your situation? Many, many thanks.

Revision history for this message
Jean- (jean-helou) wrote :

jean@xps:~$ modinfo cifs
filename: /lib/modules/2.6.38-10-generic/kernel/fs/cifs/cifs.ko
version: 1.71
description: VFS to access servers complying with the SNIA CIFS Specification e.g. Samba and Windows
license: GPL
author: Steve French <email address hidden>
srcversion: CF4A226E475E7B9C049C66F
depends:
vermagic: 2.6.38-10-generic SMP mod_unload modversions
parm: CIFSMaxBufSize:Network buffer size (not including header). Default: 16384 Range: 8192 to 130048 (int)
parm: cifs_min_rcv:Network buffers in pool. Default: 4 Range: 1 to 64 (int)
parm: cifs_min_small:Small network buffers in pool. Default: 30 Range: 2 to 256 (int)
parm: cifs_max_pending:Simultaneous requests to server. Default: 50 Range: 2 to 256 (int)
parm: echo_retries:Number of echo attempts before giving up and reconnecting server. Default: 5. 0 means never reconnect. (ushort)

Revision history for this message
zzarko (zzarko-gmail) wrote :

Here is mine:
filename: /lib/modules/2.6.38-10-generic/kernel/fs/cifs/cifs.ko
version: 1.71
description: VFS to access servers complying with the SNIA CIFS Specification e.g. Samba and Windows
license: GPL
author: Steve French <email address hidden>
srcversion: CF4A226E475E7B9C049C66F
depends:
vermagic: 2.6.38-10-generic SMP mod_unload modversions 686
parm: CIFSMaxBufSize:Network buffer size (not including header). Default: 16384 Range: 8192 to 130048 (int)
parm: cifs_min_rcv:Network buffers in pool. Default: 4 Range: 1 to 64 (int)
parm: cifs_min_small:Small network buffers in pool. Default: 30 Range: 2 to 256 (int)
parm: cifs_max_pending:Simultaneous requests to server. Default: 50 Range: 2 to 256 (int)
parm: echo_retries:Number of echo attempts before giving up and reconnecting server. Default: 5. 0 means never reconnect. (ushort)

Revision history for this message
Ubuntu Uefi User (ubuntuuefiuser) wrote :

I got to 20 MB/sec by reconfiguring Windows 7 to disable TCP autotuning (see the note after the modinfo output below). With or without forcedirectio there is now no speed difference anymore. Tested on a gigabit network.

filename: /lib/modules/2.6.38-10-server/kernel/fs/cifs/cifs.ko
version: 1.71
description: VFS to access servers complying with the SNIA CIFS Specification e.g. Samba and Windows
license: GPL
author: Steve French <email address hidden>
srcversion: CF4A226E475E7B9C049C66F
depends:
vermagic: 2.6.38-10-server SMP mod_unload modversions
parm: CIFSMaxBufSize:Network buffer size (not including header). Default: 16384 Range: 8192 to 130048 (int)
parm: cifs_min_rcv:Network buffers in pool. Default: 4 Range: 1 to 64 (int)
parm: cifs_min_small:Small network buffers in pool. Default: 30 Range: 2 to 256 (int)
parm: cifs_max_pending:Simultaneous requests to server. Default: 50 Range: 2 to 256 (int)
parm: echo_retries:Number of echo attempts before giving up and reconnecting server. Default: 5. 0 means never reconnect. (ushort)
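For anyone who wants to reproduce the Windows-side change mentioned above, receive-window autotuning on Windows 7 is normally toggled with the following command from an elevated command prompt (an illustration, not taken from this thread; use autotuninglevel=normal to restore the default):

netsh interface tcp set global autotuninglevel=disabled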

Revision history for this message
Ubuntu Uefi User (ubuntuuefiuser) wrote :

Jean- (jean-helou) and I use the same model of network switch (Netgear GS108 Gigabit switch); could it be related to that? Do the other users also have the same switch?

Revision history for this message
Bohdan (bohdan-linda) wrote :

# modinfo cifs
filename: /lib/modules/2.6.38-8-generic/kernel/fs/cifs/cifs.ko
version: 1.71
description: VFS to access servers complying with the SNIA CIFS Specification e.g. Samba and Windows
license: GPL
author: Steve French <email address hidden>
srcversion: 9D5FB508C3F48DB44F28CAF
depends:
vermagic: 2.6.38-8-generic SMP mod_unload modversions
parm: CIFSMaxBufSize:Network buffer size (not including header). Default: 16384 Range: 8192 to 130048 (int)
parm: cifs_min_rcv:Network buffers in pool. Default: 4 Range: 1 to 64 (int)
parm: cifs_min_small:Small network buffers in pool. Default: 30 Range: 2 to 256 (int)
parm: cifs_max_pending:Simultaneous requests to server. Default: 50 Range: 2 to 256 (int)
parm: echo_retries:Number of echo attempts before giving up and reconnecting server. Default: 5. 0 means never reconnect. (ushort)

Revision history for this message
Bohdan (bohdan-linda) wrote :

Boletis: the package was deleted so I cannot check, but it contains many fixes for the cifs module.

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi all. Unfortunately I have not yet completely finished configuring my Ubuntu home server to replicate your setup and run accurate tests.
Researching on Google, though, I have found many differences in how the cifs module behaves. It seems that not all people who use the cifs module see slow transfers.
The kernel documentation itself describes how to improve the speed of the cifs module by playing with the parameters I described in my post of 07/15/2011 (a quick way to list them is sketched below). I ask you to have a little patience while I finish setting up my local network for testing this bug. Thanks to all for your posts and testing.
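To see which tunables your own cifs module offers, and their allowed ranges, without digging through the kernel documentation (a sketch; the second line is only needed if you want to apply a parameter for the current session instead of via /etc/modprobe.d, and it assumes no cifs share is currently mounted):

modinfo -p cifs
sudo modprobe -r cifs && sudo modprobe cifs CIFSMaxBufSize=130048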

Revision history for this message
zzarko (zzarko-gmail) wrote :

Something happened after one of the updates in the last few days, as my speed dropped from ~7.5 MB/s to ~3.5 MB/s using cifs.

Revision history for this message
Bohdan (bohdan-linda) wrote :

Definitely a rotten cifs module in the distribution's kernel. Yesterday I upgraded to the backport, going from 2.6.38-8 to 2.6.38-10-generic like Boletis. Speed dramatically improved:

734793728 bytes (735 MB) copied, 38.0731 s, 19.3 MB/s

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi all, my test: copying a file from Ubuntu to WinXP and vice versa. The file is a 733 MB Ubuntu ISO. My network is Wi-Fi with about 20 MB transmission speed from PC to router and 100 MB from router to the Ubuntu server. This is my modinfo cifs:

~$ modinfo cifs
filename: /lib/modules/2.6.38-11-generic/kernel/fs/cifs/cifs.ko
version: 1.71
description: VFS to access servers complying with the SNIA CIFS Specification e.g. Samba and Windows
license: GPL
author: Steve French <email address hidden>
srcversion: 25ACAA091644A0398406E12
depends:
vermagic: 2.6.38-11-generic SMP mod_unload modversions 686
parm: CIFSMaxBufSize:Network buffer size (not including header). Default: 16384 Range: 8192 to 130048 (int)
parm: cifs_min_rcv:Network buffers in pool. Default: 4 Range: 1 to 64 (int)
parm: cifs_min_small:Small network buffers in pool. Default: 30 Range: 2 to 256 (int)
parm: cifs_max_pending:Simultaneous requests to server. Default: 50 Range: 2 to 256 (int)
parm: echo_retries:Number of echo attempts before giving up and reconnecting server. Default: 5. 0 means never reconnect. (ushort)

I'm using Ubuntu 11.04 server with the latest kernel, 2.6.38-11. This is my transfer speed:

sudo dd if=/media/DISCO1/iso/ubuntu-9.04-desktop-i386.iso of=/media/Prova
1431464+0 records in
1431464+0 records out
732909568 bytes (733 MB) copied, 36.3641 s, 20.2 MB/s

where /media/DISCO1/iso... is my RAID HD and /media/Prova is a WinXP directory. My transfer speed is in line with my network speed (24 MB/s). I removed all configuration parameters to run these tests. I don't think this is a bug, but rather a consequence of how the cifs.ko module is compiled. Let me know your comments, many thanks.

Revision history for this message
Bohdan (bohdan-linda) wrote :

I cannot confirm this. On my system, with both the cifs.ko modifier and the mount options:
733964288 bytes (734 MB) copied, 40.3041 s, 18.2 MB/s

without the cifs.ko modifier (CIFSMaxBufSize):
734793728 bytes (735 MB) copied, 57.1226 s, 12.9 MB/s

without the mount options (rsize=32768,wsize=32768):
733282304 bytes (733 MB) copied, 49.7562 s, 14.7 MB/s

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hi Bohdan,

I understand what you mean, and I think I didn't explain myself well. What is holding me back from confirming this bug is that, as you could see yourself, by playing with the parameters the programmer exposes for the cifs.ko module you can increase or decrease the transfer speed. Since this behaviour is intentional, we cannot talk about a bug, but rather about configuration parameters to be set according to our needs.
A bug is when the behavior of the software cannot be controlled in any way and the error comes back every time. In the previous posts we have seen that even changing the kernel version changed the reference transfer rate, which improved with the newly compiled versions.
All this makes me think that the module works and can be tuned well with the parameters available to us; what makes it slow is how it is compiled. That is not a bug in the software, but rather a defect that could come from the configuration options passed at kernel build time.
Let me know, thanks.

Revision history for this message
Bohdan (bohdan-linda) wrote :

Hello Massimo,

I think we are still talking about different things. My solution has two halves:

1. Adding the required configuration parameters, which improve performance. On this step I agree with you completely.
2. Upgrading from 2.6.38-8 (from main) to 2.6.38-10 (from backports), which brought many patches to cifs (http://www.ubuntuupdates.org/packages/show/314403); these are obviously not configuration matters.

Taking only one of these steps does not let me benefit from the bandwidth I have available. For an acceptable solution I have to apply a fix; therefore, from my point of view, it's a bug.

If 2.6.38-10 is now a valid update to the kernel in main natty, then we can speak solely about a configuration defect.

Kind regards,
Bohdan

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hello Bohdan,

You are right about the patches applied to kernel 2.6.38-10:

  * cifs: change bleft in decode_unicode_ssetup back to signed type
    - LP: #788691
  * cifs: check for bytes_remaining going to zero in CIFS_SessSetup
    - LP: #788691
  * cifs: sanitize length checking in coalesce_t2 (try #3)
    - LP: #788691
  * cifs: refactor mid finding loop in cifs_demultiplex_thread
    - LP: #788691
  * cifs: handle errors from coalesce_t2
    - LP: #788691
  * CIFS: Fix memory over bound bug in cifs_parse_mount_options
    - LP: #788691
  * cifs: add fallback in is_path_accessible for old servers
    - LP: #788691
  * cifs: clean up various nits in unicode routines (try #2)
    - LP: #788691
  * cifs: fix cifsConvertToUCS() for the mapchars case
    - LP: #788691

but none of these patches concerns the speed of data transfer. However, the stable version of the Ubuntu kernel is now https://launchpad.net/ubuntu/natty/+package/linux-image-2.6.38-11-generic and this version contains the updates mentioned above, along with new kernel patches. If you agree, I will mark this bug with "fix released" status and the comment: "fixed by the new kernel version".
Let me know; many thanks to all for your support.

Revision history for this message
Bohdan (bohdan-linda) wrote :

Massimo,

2.6.38-11 is still in proposed; I am syncing with backports, and there 2.6.38-10 is available. How will the required configuration be handled with respect to a default Ubuntu installation?

Regards,
Bohdan

Revision history for this message
Massimo Forti (slackwarelife) wrote :

Hello Bohdan,
excuse the delay, but I was on vacation. The 2.6.38-11 kernel should now be available in the Ubuntu 11.04 repositories. However, cifs seems to work much better now. If, after some time, there are no further problems, I shall set the status of this bug to Invalid.

Revision history for this message
Ubuntu Uefi User (ubuntuuefiuser) wrote :

I'm trying it on Ubuntu Server 11.10 beta, kernel 3.0.0-10.

Windows 7 --> Ubuntu server via mount cifs =< 20 % link utilization in windows 7 task manager
Ubuntu server <-- Windows 7 via samba in explorer => 30 % link utilization in windows 7 task manager

Any further slowness can now be attributed to btrfs in my case, although general disk throughput is very high (100+ MB/s). Cifs in combination with btrfs is a big performance hit on an Atom CPU. Performance over cifs is now way better than over sftp.

Revision history for this message
Ubuntu Uefi User (ubuntuuefiuser) wrote :

Tried on the same system using Windows 8 <-- Windows 7, resulting in 40% link utilization (50 MB/s).

Windows 8 thus has double the cifs speed that Ubuntu's mount.cifs has at the moment.

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for cifs-utils (Ubuntu) because there has been no activity for 60 days.]

Changed in cifs-utils (Ubuntu):
status: Incomplete → Expired
Revision history for this message
Josu Lazkano (josu-lazkano) wrote :

Same problem here on Oneiric.

The upload speed is good (11.x MB/sec), but the download is poor (5.x MB/sec) on a 100 Mbps LAN.

I have tried all the parameters of the cifs module.

Regards.

Revision history for this message
Josh Burghandy (kid1000002000) wrote :
Revision history for this message
Chris (chrisccnpspam) wrote :

I'm running into this issue myself. I just loaded up a new application on my Linux system that is being bottlenecked by this smbfs I/O issue. I've been reading through these threads (among others) and have done as many tweaks as I can. As one person above showed by testing smbclient vs smbfs/cifs, you can see there is quite a difference.

I run a Linux VM (SMB client) and a Windows 2008 R2 server (SMB server) on a VMware box. Using smbclient I am able to get 500+ MB/sec of throughput; using smbmount or an fstab mount I can only get around 120 MB/sec or so. Granted, that is GigE speed, but my VMs are linked together via 10 Gig virtual NICs.
