
How many open files does a user, and the Linux system as a whole, have?

Published: 2020-12-13 22:55:35 · Category: Linux · Source: collected from the web
Sorry, this question has several layers, but they all concern the number of open files.

I am getting a "Too many open files" message in the logs of the application I am developing. Someone suggested that I:

> find the number of open files currently in use, system-wide and per user
> find the system and per-user limits on open files.
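Assuming a Linux box with /proc mounted, both suggestions can be sketched roughly as follows (the user name `appuser` is a placeholder, not something from the question):

```shell
# Per-process soft and hard open-file limits for the current shell:
ulimit -Sn
ulimit -Hn

# Count real file descriptors held by all processes of one user
# ("appuser" is a placeholder; reading /proc/<pid>/fd needs permission):
for pid in $(pgrep -u appuser); do
    ls "/proc/$pid/fd" 2>/dev/null
done | wc -l
```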

I ran ulimit -n and it returned 1024. I also looked at /etc/limits.conf and there is nothing special in that file; /etc/sysctl.conf has not been modified either. I will list the contents of both files below. I also ran lsof | wc -l, which returned 5000 lines (if I am using it correctly).

So, my main questions are:

> How do I find the number of open files allowed per user? Is the soft limit the nofile setting found/defined in /etc/limits.conf? What is the default, since I have not touched /etc/limits.conf?
> How do I find the system-wide number of open files allowed? Is that the hard limit in limits.conf? What is the default if limits.conf is not modified?
> What is the number ulimit returns for open files? It says 1024, but when I run lsof and count its lines I get over 5000, so something is not adding up for me. Are there other commands I should run, or files I should look at, to get these limits? Thanks in advance for your help.

Contents of limits.conf:

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - a user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20,19]
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

# End of file

Contents of sysctl.conf:

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# the interval between the last data packet sent and the first keepalive probe
net.ipv4.tcp_keepalive_time = 600

# the interval between subsequential keepalive probes
net.ipv4.tcp_keepalive_intvl = 60

# the number of unacknowledged probes to send before considering the connection dead and notifying the application layer
net.ipv4.tcp_keepalive_probes = 10

# try as hard as possible not to swap, as safely as possible
vm.swappiness = 1
fs.aio-max-nr = 1048576
#fs.file-max = 4096

Solution

There is no per-user file limit. What you need to watch are the system-wide limit and the per-process limit. The per-process file limit multiplied by the per-user process limit would in theory give a per-user file limit, but with normal values the product is so large as to be effectively unlimited.
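As a rough illustration, that theoretical per-user ceiling is just the product of two ulimit values (a sketch only; ulimit -Su can report "unlimited" on some systems, in which case there is nothing to multiply):

```shell
nofile=$(ulimit -Sn)   # per-process open-file limit
nproc=$(ulimit -Su)    # per-user process limit (may be "unlimited")
if [ "$nproc" != "unlimited" ]; then
    # Every allowed process holding every allowed descriptor:
    echo "$((nofile * nproc)) files, in theory"
else
    echo "per-user process limit is unlimited, so the product is too"
fi
```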

Also, the original purpose of lsof was to LiSt Open Files, but it has since grown to list other things as well, such as the cwd and mmap regions, which is another reason it outputs more lines than you might expect.
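To see those non-descriptor entries, you can group lsof's FD column for a single process (the current shell here; the exact labels vary by system):

```shell
# The FD column holds numeric descriptors (e.g. "0u", "255u") alongside
# labels such as cwd (current directory), rtd (root directory), txt
# (program text) and mem (memory-mapped file); count each kind:
lsof -p $$ | awk 'NR > 1 { print $4 }' | sed 's/[rwu]$//' | sort | uniq -c
```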

The error message "Too many open files" is associated with the errno value EMFILE, the per-process limit, which in your case appears to be 1024. If you can find the right options to make lsof show only the actual file descriptors of a single process, you will probably find it has 1024 of them, or something very close.
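One way to count only the real descriptors of a single process, assuming Linux's /proc is available (the shell's own PID stands in for the application's):

```shell
# Numeric file descriptors only, straight from the kernel:
ls /proc/$$/fd | wc -l

# Roughly equivalent with lsof: -a ANDs the selectors, and -d restricts
# the FD column to a numeric range, excluding cwd/txt/mem entries:
lsof -a -p $$ -d '0-1048575' | awk 'NR > 1' | wc -l
```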

These days you rarely need to tune the system-wide file-descriptor limit by hand, because its default scales with memory. If you do need it, it lives in /proc/sys/fs/file-max, and information about current usage is in /proc/sys/fs/file-nr. Your sysctl file has a file-max value of 4096, but it is commented out, so you should not take it seriously.
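Reading both files side by side makes the relationship clear (sysctl reports the same value):

```shell
# file-nr prints three fields: allocated handles, allocated-but-unused
# handles (always 0 on modern kernels), and the system-wide maximum:
cat /proc/sys/fs/file-nr

# The maximum on its own; the default scales with installed RAM:
cat /proc/sys/fs/file-max
sysctl fs.file-max
```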

If you do manage to hit the system-wide limit, you will get the error ENFILE, which translates to the message "File table overflow" or "Too many open files in system".

(Editor: 李大同)
