Mailsync crash while streaming metadata deltas

Description

To Reproduce…

Sync any account (happens on Outlook, Gmail, and custom IMAP) for some time; Mailsync will eventually crash. It restarts just fine, but the “mailsync.bin (part of Mailspring) has crashed” notification is annoying.

Expected Behavior

Handle errors during metadata streaming gracefully, e.g. by simply retrying.
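For illustration, a retry could look something like the sketch below. The names are hypothetical (`runWithRetries` and its `attempt` callback stand in for whatever MetadataWorker actually does); the point is that a transient stream error is caught and retried with backoff instead of escaping the thread and aborting the process.

```cpp
#include <chrono>
#include <functional>
#include <stdexcept>
#include <thread>

// Runs `attempt` until it reports a clean end of stream, retrying up to
// `maxRetries` times with exponential backoff when it throws. Returns the
// number of retries that were needed. Hypothetical sketch, not Mailspring's
// actual code.
inline int runWithRetries(const std::function<bool()>& attempt,
                          int maxRetries = 5,
                          int baseDelayMs = 500) {
    int tries = 0;
    while (true) {
        try {
            if (attempt()) {
                return tries;  // stream ended cleanly
            }
        } catch (const std::exception&) {
            // Swallow and retry; a real implementation would log ex.what()
            // and distinguish retryable from fatal errors.
        }
        if (++tries > maxRetries) {
            throw std::runtime_error("metadata stream failed after retries");
        }
        std::this_thread::sleep_for(
            std::chrono::milliseconds(baseDelayMs * (1 << (tries - 1))));
    }
}
```

With this shape, the worker thread survives a dropped HTTP/2 stream and simply reconnects from the last cursor, which matches the behavior already visible in the log (the process restarts and logs "Metadata delta stream starting..." again).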

Setup

  • OS and Version: Linux linusarch 6.19.6-arch1-1 #1 SMP PREEMPT_DYNAMIC Wed, 04 Mar 2026 18:25:08 +0000 x86_64 GNU/Linux
  • Installation Method: Flatpak, but also happens when building from source
  • Mailspring Version: 1.18.0

Additional Context

Mailsync log:

1161 [2026-03-08 13:15:17.377] [background] [info] Sync loop complete.
1161 [2026-03-08 13:16:10.076] [metadata] [info] Metadata delta stream closed.
1161 [2026-03-08 13:16:10.084] [metadata] [critical] 
***
*** Mailspring Sync 
*** An exception occurred during program execution: 
*** {"debuginfo":"https://id.getmailspring.com/deltas/edbcd900/streaming?p=linux&ih=imap.kit.edu&cursor=146487545","key":"Stream error in the HTTP/2 framing layer","offline":false,"retryable":false,"what":"std::exception"}
***

1161 [2026-03-08 13:16:10.084] [metadata] [critical] *** Stack trace (line numbers are approximate):
*** ??:?  stringbuf::~stringbuf()
*** ??:?  ctype::do_narrow(char, char) const
*** ??:?  stringbuf::~stringbuf()
*** ??:?  stringbuf::~stringbuf()
*** ??:?  stringbuf::~stringbuf()
*** ??:?  error_code::default_error_condition() const
*** ??:?  __clone()
***

1387 [2026-03-08 13:16:10.293] [main] [info] Identity created at 1545333558 - using ID Schema 1
1387 [2026-03-08 13:16:10.295] [main] [info] ------------- Starting Sync (linus.dierheimer@student.kit.edu) ---------------
1387 [2026-03-08 13:16:10.297] [background] [info] Marking all folders as `busy`
1387 [2026-03-08 13:16:10.297] [background] [info] Syncing folder list...
1387 [2026-03-08 13:16:10.297] [metadata] [info] Metadata delta stream starting...

GDB backtrace:

#0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44
        tid = <optimized out>
        ret = 0
        pd = <optimized out>
        old_mask = {__val = {1}}
        ret = <optimized out>
#1  0x00007fc060e9d5e3 in __pthread_kill_internal (threadid=<optimized out>, signo=6) at pthread_kill.c:89
No locals.
#2  0x00007fc060e433be in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
        ret = <optimized out>
#3  0x00007fc060e2a8ed in __GI_abort () at abort.c:77
        act = {__sigaction_handler = {sa_handler = 0x7fc057ffec80, sa_sigaction = 0x7fc057ffec80}, sa_mask = {__val = {0, 0, 140464236436177, 140464086838432, 12, 8098988873639554625, 139639829522804, 140463819265328, 218, 218, 37633081173614, 140463818864592, 14814468481968396032, 94706636439072, 94706636439072, 
              140463818411072}}, sa_flags = -1669600704, sa_restorer = 0x56229b6c3e20 <backoffSeconds>}
#4  0x000056229b196b54 in MetadataWorker::run (this=0x7fc048000c40) at /run/build/Mailspring/mailsync/MailSync/MetadataWorker.cpp:75
        ex = <optimized out>
#5  0x000056229b56d224 in std::execute_native_thread_routine (__p=0x56229c7e03a0) at ../../../../../libstdc++-v3/src/c++11/thread.cc:104
        __t = <optimized out>
#6  0x00007fc060e9b56a in start_thread (arg=<optimized out>) at pthread_create.c:448
        ret = <optimized out>
        pd = <optimized out>
        out = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140464086841024, -9040754288795044312, 140464086841024, 140733734852336, 0, 140733734852599, -9040754288769878488, -9040791588060514776}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
#7  0x00007fc060f1ee54 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:100
No locals.

Seeing the same thing on both a Fedora install and a Snap install.

Hey folks, thanks for reporting this - I actually saw this go by on my machine last night as well, let me see if we can get to the root of this one. We did make some improvements to stderr management in the mailsync process in the latest release (1.19.0), but I don’t think that will fully resolve this.


Hey folks, one more update here. It looks like our backend service provider (fly.io) may have recently moved the default to HTTP/2, and the sync service doesn’t correctly identify these connection drops as retryable (`"retryable": true`) because they surface a different error code than the same drops do under HTTP/1.1.

We don’t get much upside from HTTP/2 for this, so I’ve changed the backend configuration to disable HTTP/2 for the time being. It looks like this has fixed the issue for me and I expect it’ll improve reliability for you as well. The next version of the sync engine will also correctly mark these as retryable connection errors for good measure!
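For anyone curious what "correctly mark these as retryable" might look like: the error string in the log ("Stream error in the HTTP/2 framing layer") corresponds to libcurl's `CURLE_HTTP2_STREAM` code, so the fix is plausibly just adding that code to the retryable set. The sketch below is a guess at the shape of such a classifier, not Mailspring's actual implementation; the constants mirror the `CURLcode` values in `<curl/curl.h>` (in real code you would include that header rather than redefining them).

```cpp
// CURLcode values copied from <curl/curl.h> so this sketch is self-contained.
constexpr int kCouldntConnect   = 7;   // CURLE_COULDNT_CONNECT
constexpr int kPartialFile      = 18;  // CURLE_PARTIAL_FILE
constexpr int kOperationTimeout = 28;  // CURLE_OPERATION_TIMEDOUT
constexpr int kGotNothing       = 52;  // CURLE_GOT_NOTHING
constexpr int kSendError        = 55;  // CURLE_SEND_ERROR
constexpr int kRecvError        = 56;  // CURLE_RECV_ERROR
constexpr int kHttp2Stream      = 92;  // CURLE_HTTP2_STREAM

// Treat transient network failures as retryable, now including HTTP/2
// framing-layer stream errors, so the worker backs off and reconnects
// instead of aborting the process.
inline bool isRetryableCurlError(int code) {
    switch (code) {
        case kCouldntConnect:
        case kPartialFile:
        case kOperationTimeout:
        case kGotNothing:
        case kSendError:
        case kRecvError:
        case kHttp2Stream:
            return true;
        default:
            return false;
    }
}
```

That would also explain the log entry above: the exception payload carried `"retryable":false`, so the worker treated a recoverable stream reset as fatal.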