People being disconnected with: "An existing connection was forcibly closed by the remote host"

Discussion in 'Systems Administration' started by Tundral, Apr 23, 2018.

  1. So I moved to a new host (a dedicated server) yesterday, and now for some reason people are having difficulty staying connected to the server.

    What happens is they connect, everything looks normal on the server side, and then after being 'connected' for about 10 seconds they get disconnected with just the default "Player left the game" message in the console and nothing more. This doesn't affect all players, and the ones who can keep a connection aren't having any issues.

    One of the people having difficulties tells me she's getting this error:
    Code (Text):
    An existing connection was forcibly closed by the remote host

    It's a fresh dedicated server and I don't have any proxy/VPN blocker plugins or settings enabled. I copied the server folder from the old host completely, apart from spigot.yml, if that might be the source of the problem.

    Any suggestions? Help is greatly appreciated!
     
  2. Is the port properly open on your machine? Is there anything else in the console when a player's connection is closed? Also, try having her, or another player with the same issue, send you their client log from when they get disconnected so I can see it here. Those are just thoughts off the top of my head.
     
  3. The ports are properly open, since this only affects a handful of people, and the console looks just like the player left normally, only giving the "X player left the game" message.

    I'll ask for the log next time someone gets kicked like that!
     
  4. Sounds good. I think the log would help then as it seems the issue's client-side.
     
  5. Well, the thing is that it's been happening to multiple people seemingly at random (though distance from the server seems to correlate somewhat), so I'm inclined to think it's something with the routing of the players' connections, which I would have a hard time affecting.
     
  6. @Tundral

    This seems unlikely to be a client-side issue, since it affects multiple people after a host move. Contact your server provider.
     
  7. I contacted them, but unfortunately I have no such problems with the server myself, and the host requires an MTR analysis of a troublesome connection in both directions to diagnose whether it's something on their end. Reading the PaperMC issue, though, I'm starting to wonder if it's something with the client or the players' own networks.
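    For what it's worth, a both-ways MTR report is usually gathered with the standard `mtr` tool, one run from the server toward the player and one from the player toward the server (on Windows, players typically use WinMTR instead). The target addresses below are placeholders; the flag set is the common one, but check what exact format your host asks for:

    ```shell
    # One run per direction, e.g.:
    #   on the dedi:      mtr -rwz -c 100 PLAYER_IP > mtr_server_to_player.txt
    #   on the player PC: mtr -rwz -c 100 SERVER_IP > mtr_player_to_server.txt
    #
    # Flags used:
    #   -r  report mode (print a summary instead of the interactive UI)
    #   -w  wide output (don't truncate hostnames)
    #   -z  show AS numbers, handy for spotting which network a bad hop sits in
    #   -c  number of probe cycles; ~100 gives a reasonably stable loss figure

    # Helper that builds the command line so the flags stay in one place:
    build_mtr_cmd() {
      target="$1"; cycles="$2"
      echo "mtr -rwz -c $cycles $target"
    }
    ```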

    Could you give me a rundown of whether this affects non-Paper Spigot, what the issue actually is, and whether a fix is coming? I'm sorry, but the issue thread is quite long and has a lot of info that's not too relevant to me. The issue does seem quite similar to mine, though. I used PaperSpigot before, but after having some problems with some plugins I stopped using it and recompiled the whole server from scratch.

    Also, I'm pretty sure this is not a client-side software issue, since one of my players played on the same computer on two different networks, and one of them worked while the other didn't.

    The thing you linked mentioned something about view distance, which I currently have set to 16. I'll try decreasing it later, in case the client-side network/software is being flooded with too much data while trying to load the world or something like that.
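    For reference, in case it helps anyone else reading: the setting lives in spigot.yml under the per-world settings. A minimal fragment (the value 10 here is just an illustrative lower value to test with, not a recommendation from this thread):

    ```yaml
    # spigot.yml (per-world settings)
    world-settings:
      default:
        view-distance: 10   # lowered from 16 to reduce chunk data sent on join
    ```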

    Also, the mentions of increased ping and increased server load have bothered me too, and actually made me swap hosts, since the old host's CPU configuration couldn't withstand the increased load and would lock up the server. One of my moderators, while viewing the TPS history from LagMonitor, noted that the server TPS dipped slightly (from 20/20/20 to around 19.98/19.98/19.98) when certain players were on, and I just realized that at least one of the players he thinks causes the slight dip also has the connection issues.

    Thanks for the replies guys!
     
  8. electronicboy

    IRC Staff

    That issue affects Spigot more than it does Paper right now, due to my patches against Paper. The issue basically comes down to this: clients now have X amount of time to connect, and that includes processing all of the chunks on their end, so that they can reply to the keepalive before they're booted. If a client fails to reply in time (e.g. they have a poor connection, or your server is simply sending way too much for them to reply to the keepalive in time), they'll disconnect due to a timeout, which Spigot won't log much about beyond the standard disconnect message.
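    To make the mechanism above concrete, here is a minimal sketch of the keepalive timeout idea, not the actual Spigot/Paper internals; the class name, method names, and the 30-second window are all assumptions for illustration:

    ```java
    // Minimal sketch of a keepalive timeout check. The 30s window and all
    // names are illustrative, not real Spigot/Paper code.
    public class KeepAliveTracker {
        private static final long TIMEOUT_MS = 30_000; // assumed window

        private long lastKeepAliveSentMs;
        private boolean awaitingResponse;

        /** Server sends a keepalive packet and starts the clock. */
        public void onKeepAliveSent(long nowMs) {
            lastKeepAliveSentMs = nowMs;
            awaitingResponse = true;
        }

        /** Client replied in time; clear the pending flag. */
        public void onKeepAliveResponse() {
            awaitingResponse = false;
        }

        /**
         * True when the client is still busy (e.g. decoding a flood of
         * chunk data) past the window and should be kicked for a timeout.
         */
        public boolean shouldDisconnect(long nowMs) {
            return awaitingResponse
                && (nowMs - lastKeepAliveSentMs) > TIMEOUT_MS;
        }
    }
    ```

    The point of the sketch is just that a slow client never gets to "answer later": once the window closes with a keepalive still pending, the server treats it as a dead connection.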

    "Increased ping" is a bit of a misnomer for Paper; there are no changes that would affect the ping itself. During the initial connection, because of the keepalive changes and how the ping is averaged out, it can show a bit high for a while. I've added something in Paper to try and help with this, but it's not really perfect. For Spigot, there is a chance of the latency showing higher, because Spigot queues keepalives to be processed on the main thread as opposed to handling them async as it should.

    You're likely not going to see any improvements on this from Spigot, but the issue I linked should be a good read, especially the last comment.
     
  9. Thanks for the info @electronicboy !

    The theory about the keepalive timeout sounds credible but I have a few questions:

    - Mustn't there be something weird about the new host's network (Hetzner) for it to be the keepalive timeout problem?
    It's hard to imagine that upgrading the server and moving the server folder as-is would suddenly cause this unless there were a problem with the host's network or the host machine's network adapter/driver (it's a Realtek adapter; I know that's not great, and I'll look into switching to a dedi with an Intel NIC at the end of the current dedi's billing period, just to be sure).
    It also feels weird that going from 100 Mb/s to gigabit would actually make things worse, even though the new server is a few hundred kilometers further inland than the old one, so a longer distance and more hops for NA players.

    - I've actually had players with bad connections get disconnected with the keepalive timeout error before, but then it would show me the keepalive timeout warning in the console. I'm currently using plain Spigot and don't actually know whether Spigot shows the keepalive timeout warning separately like Paper does? I remember the warnings being coloured in the console, so I must have been using Paper when I saw them, well before any of these problems.

    I had the disconnect happen to me, actually, so now I have a client log to paste here, yay!:

    Code (Text):
    [20:08:55] [Netty Client IO #18/ERROR]: NetworkDispatcher exception
    java.io.IOException: An existing connection was forcibly closed by the remote host
        at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[?:1.8.0_25]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[?:1.8.0_25]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:1.8.0_25]
        at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_25]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) ~[?:1.8.0_25]
        at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[PooledUnsafeDirectByteBuf.class:4.1.9.Final]
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100) ~[AbstractByteBuf.class:4.1.9.Final]
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372) ~[NioSocketChannel.class:4.1.9.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) [AbstractNioByteChannel$NioByteUnsafe.class:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:624) [NioEventLoop.class:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:559) [NioEventLoop.class:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:476) [NioEventLoop.class:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:438) [NioEventLoop.class:4.1.9.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [SingleThreadEventExecutor$5.class:4.1.9.Final]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_25]
    [20:08:55] [Netty Client IO #18/INFO]: Scheduling client settings reset.
    [20:08:55] [main/INFO]: Applying holder lookups
    [20:08:55] [main/INFO]: Holder lookups applied
    [20:08:56] [main/INFO]: Client settings have been reset.
    Anything you can glean from that? Is it the exact same issue as in the GitHub post?

    EDIT: Could you guys possibly connect to the server in my signature and see if you have the problem? If you do, I would appreciate it if you could help me get the MTR report the host requires, to maybe get a bit more help from them (maybe they'd even let me swap to an Intel NIC option if I told them that the last host, which probably had an Intel NIC, didn't have this problem).
     
    #10 Tundral, Apr 26, 2018
    Last edited: Apr 26, 2018
  10. [QUOTE="Tundral, post: 2980681, member: 250354"]…[/QUOTE]

    Was the problem solved?
     
