Analyzing and Fixing the enfile Error When gen_tcp Accepts Connections


December 5th, 2011

Original article from 系统技术非业余研究; please credit the source when reposting.


Recently, for security reasons, we deployed a proxy on our RDS servers that wraps plain MySQL TCP connections in SSL. During testing, 皓庭 found that once Tsung had opened a few thousand TCP connections, the Erlang-based SSL proxy kept reporting {error, enfile} from gen_tcp:accept. I investigated the problem as follows.

First, man accept to confirm what ENFILE means, since gen_tcp ultimately calls the accept system call:

EMFILE The per-process limit of open file descriptors has been reached.
ENFILE The system limit on the total number of open files has been reached.


$ uname -r
$ cat /proc/sys/fs/file-nr 
2040    0       2417338
$ ulimit -n

We had already tuned the system's file-handle limits (see the earlier post 老生常谈: ulimit问题及其影响), and the numbers look perfectly normal: only 2040 of the 2417338 allowed file handles are in use. Next, let's see where the kernel itself can return ENFILE; the snippets below are abridged from the kernel source:

static int sock_alloc_fd(struct file **filep)
{
        int fd;

        fd = get_unused_fd();
        if (likely(fd >= 0)) {
                struct file *file = get_empty_filp();

                *filep = file;
                if (unlikely(!file)) {
                        put_unused_fd(fd);
                        return -ENFILE;
                }
        } else
                *filep = NULL;
        return fd;
}

static int __sock_create(int family, int type, int protocol, struct socket **res, int kern)
{
        ...
        /*
         *      Allocate the socket and allow the family to set things up. if
         *      the protocol is 0, the family is instructed to select an appropriate
         *      default.
         */
        if (!(sock = sock_alloc())) {
                if (net_ratelimit())
                        printk(KERN_WARNING "socket: no more sockets\n");
                err = -ENFILE;          /* Not exactly a match, but its the
                                           closest posix thing */
                goto out;
        }
        ...
}

asmlinkage long sys_accept(int fd, struct sockaddr __user *upeer_sockaddr, int __user *upeer_addrlen)
{
        struct socket *sock, *newsock;
        struct file *newfile;
        int err, len, newfd, fput_needed;
        char address[MAX_SOCK_ADDR];

        sock = sockfd_lookup_light(fd, &err, &fput_needed);
        if (!sock)
                goto out;

        err = -ENFILE;
        if (!(newsock = sock_alloc()))
                goto out_put;
        ...
}


So the kernel returns ENFILE when it cannot allocate a socket or a file handle. To check whether any of these paths ever fires, hook them with SystemTap:

$ cat enfile.stp
probe kernel.function("kmem_cache_alloc").return {
  if ($return == 0) { print_backtrace(); exit(); }
}
probe kernel.function("sock_alloc_fd").return {
  if ($return < 0) { print_backtrace(); exit(); }
}
probe syscall.accept.return {
  if ($return == -23) { print_backtrace(); exit(); }
}
probe begin {
  println("watching for ENFILE...")
}
$ sudo stap enfile.stp

When gen_tcp:accept reported {error, enfile}, stap showed nothing abnormal either, so the operating system can basically be ruled out. Back to gen_tcp's implementation, then.
gen_tcp is built on a port; the implementation lives in erts/emulator/drivers/common/inet_drv.c. Here is where ENFILE appears:

/* Copy a descriptor, by creating a new port with same settings
 * as the descriptor desc.
 * return NULL on error (ENFILE no ports avail)
 */
static tcp_descriptor* tcp_inet_copy(tcp_descriptor* desc, SOCKET s,
                                     ErlDrvTermData owner, int* err)
{
    ...
    /* The new port will be linked and connected to the original caller */
    port = driver_create_port(port, owner, "tcp_inet", (ErlDrvData) copy_desc);
    if ((long)port == -1) {
        *err = ENFILE;
        return NULL;
    }
    ...
}
So gen_tcp returns ENFILE when driver_create_port fails; it looks like we have found the right place. Let's look at the implementation of driver_create_port:

/*
 * Driver function to create new instances of a driver
 * Historical reason: to be used with inet_drv for creating
 * accept sockets inorder to avoid a global table.
 */
driver_create_port(ErlDrvPort creator_port_ix, /* Creating port */
                   ErlDrvTermData pid,    /* Owner/Caller */
                   char* name,            /* Driver name */
                   ErlDrvData drv_data)   /* Driver data */
{
    ...
    rp = erts_pid2proc(NULL, 0, pid, ERTS_PROC_LOCK_LINK);
    if (!rp) {
        return (ErlDrvTermData) -1;   /* pid does not exist */
    }
    if ((port_num = get_free_port()) < 0) {
        errno = ENFILE;
        erts_smp_proc_unlock(rp, ERTS_PROC_LOCK_LINK);
        return (ErlDrvTermData) -1;
    }

    port_id = make_internal_port(port_num);
    port = &erts_port[port_num & erts_port_tab_index_mask];
    ...
}

When get_free_port() returns a negative value, we get the ENFILE error. So how is the total number of ports determined?

/* initialize the port array */
void init_io(void)
{
    ...
    if (erts_sys_getenv("ERL_MAX_PORTS", maxports, &maxportssize) == 0)
        erts_max_ports = atoi(maxports);
    else
        erts_max_ports = sys_max_files();

    if (erts_max_ports > ERTS_MAX_PORTS)
        erts_max_ports = ERTS_MAX_PORTS;
    if (erts_max_ports < 1024)
        erts_max_ports = 1024;

    if (erts_use_r9_pids_ports) {
        ports_bits = ERTS_R9_PORTS_BITS;
        if (erts_max_ports > ERTS_MAX_R9_PORTS)
            erts_max_ports = ERTS_MAX_R9_PORTS;
    }

    port_extra_shift = erts_fit_in_bits(erts_max_ports - 1);
    port_num_mask = (1 << ports_bits) - 1;
    ...
}

Step 1: if the ERL_MAX_PORTS environment variable is set, use its value; otherwise default to the same value as ulimit -n.
Step 2: clamp the result to at most ERTS_MAX_PORTS and at least 1024.

Now the cause is basically clear: erts_max_ports was set too small. Let's verify by attaching gdb to our process:

(gdb) p erts_max_ports
$1 = 4096

So the port limit was the culprit. It looks roundabout at first, but the Erlang designers treat port exhaustion (ports are Erlang's IO resource) just like the operating system's file-handle exhaustion: hitting the system limit produces an ENFILE error!

The fix: start the VM with erl -env ERL_MAX_PORTS NNNN, using a sufficiently large value.

While we are at it, here are a few key Erlang server parameters worth emphasizing, taken from http://www.ejabberd.im/tuning; they are very helpful when configuring a server.

This page lists several tricks to tune your ejabberd and Erlang installation for maximum performance gains. Note that some of the described options are experimental.

Erlang Ports Limit: ERL_MAX_PORTS
Erlang consumes one port for every connection, either from a client or from another Jabber server. The option ERL_MAX_PORTS limits the number of concurrent connections and can be specified when starting ejabberd:

erl -s ejabberd -env ERL_MAX_PORTS 5000 …

Maximum Number of Erlang Processes: +P
ejabberd uses a lot of lightweight Erlang processes. If there is so much activity on ejabberd that the maximum number of processes is reached, people will experience greater latency. As these processes are implemented in Erlang, and are therefore unrelated to operating-system processes, you do not have to worry about allowing a huge number of them.

erl -s ejabberd +P 250000 …

ERL_FULLSWEEP_AFTER: Maximum number of collections before a forced fullsweep
The ERL_FULLSWEEP_AFTER option shrinks the size of Erlang processes after RAM-intensive events. Note that this option may degrade performance, so it is only interesting on machines that host other services (webserver, mail) and on which ejabberd does not receive constant load.

erl -s ejabberd -env ERL_FULLSWEEP_AFTER 0 …

Kernel Polling: +K true

The kernel polling option requires support in your kernel. By default, Erlang currently supports kernel polling under FreeBSD, Mac OS X, and Solaris. If you use Linux, check this newspost. Additionally, you need to enable this feature when compiling Erlang.

From Erlang documentation -> Basic Applications -> erts -> erl -> System Flags:

+K true|false

Enables or disables the kernel poll functionality if the emulator has kernel poll support. By default the kernel poll functionality is disabled. If the emulator doesn't have kernel poll support and the +K flag is passed to the emulator, a warning is issued at startup.

If you meet all requirements, you can enable it in this way:

erl -s ejabberd +K true …

Mnesia Tables to Disk
By default, ejabberd uses Mnesia as its database. In Mnesia you can configure each table in the database to be stored in RAM, in RAM and on disk, or only on disk. You can configure this in the web interface: Nodes -> 'mynode' -> DB Management. Modifying this option will consume some memory and CPU time.

Number of Concurrent ETS and Mnesia Tables: ERL_MAX_ETS_TABLES
The number of concurrent ETS and Mnesia tables is limited. When the limit is reached, errors will appear in the logs:

** Too many db tables **

You can safely increase this limit when starting ejabberd. It impacts memory consumption but the difference will be quite small.

erl -s ejabberd -env ERL_MAX_ETS_TABLES 20000 …




Comment (April 4th, 2012):


    OTP-9990 Fix returned error from gen_tcp:accept/1,2 when running out
    of ports

    The {error, enfile} return value is badly misleading and
    confusing for this case, since the Posix ENFILE errno value
    has a well-defined meaning that has nothing to do with Erlang
    ports. The fix changes the return value to {error,
    system_limit}, which is consistent with e.g. various file(3)
    functions. inet:format_error/1 has also been updated to
    support system_limit in the same manner as
    file:format_error/1. (Thanks to Per Hedeland)
