
Resource Usage

Is the Apprise API Docker container using more memory than you expect? The table below maps your usage level to recommended settings. For most personal and hobbyist deployments, a few environment variables are all it takes.

| Profile | Daily Notifications | APPRISE_WORKER_COUNT | APPRISE_WORKER_MAX_REQUESTS | Expected RAM |
| --- | --- | --- | --- | --- |
| Hobbyist | 1 – 50 | 1 | 50 | ~150–180 MB |
| Light | 51 – 500 | 1 | 200 | ~180–220 MB |
| Medium | 501 – 5,000 | 2 | 500 | ~330–400 MB |
| Heavy | 5,001 – 20,000 | 3–4 | 1000 (default) | ~500–700 MB |
| High-volume | 20,000+ | (auto) | 1000 (default) | varies |
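The profile thresholds above can be expressed as a small helper. A sketch, assuming the `daily` value is something you supply yourself:

```shell
# Hypothetical helper: pick worker settings from the profile table above
daily=120   # your estimated notifications per day (assumed value)

if   [ "$daily" -le 50 ];   then workers=1; max_req=50     # Hobbyist
elif [ "$daily" -le 500 ];  then workers=1; max_req=200    # Light
elif [ "$daily" -le 5000 ]; then workers=2; max_req=500    # Medium
else                             workers=4; max_req=1000   # Heavy
fi

echo "APPRISE_WORKER_COUNT=$workers APPRISE_WORKER_MAX_REQUESTS=$max_req"
```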

The following example shows how to apply these settings to your deployment.

```shell
docker run --name apprise \
  -e APPRISE_WORKER_COUNT=1 \
  -e APPRISE_WORKER_MAX_REQUESTS=50 \
  -p 8000:8000 \
  -v ./config:/config \
  -d caronc/apprise:latest
```
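If you deploy with Docker Compose instead, the same environment variables apply. A minimal sketch, mirroring the `docker run` example above (service name and paths are assumptions):

```yaml
services:
  apprise:
    image: caronc/apprise:latest
    container_name: apprise
    ports:
      - "8000:8000"
    volumes:
      - ./config:/config
    environment:
      - APPRISE_WORKER_COUNT=1
      - APPRISE_WORKER_MAX_REQUESTS=50
```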

The container always runs three processes regardless of settings:

| Process | RAM | Notes |
| --- | --- | --- |
| Nginx | ~25 MB | Reverse proxy |
| Supervisord | ~10 MB | Process manager |
| Gunicorn worker | ~115–145 MB | Python + Django + all Apprise plugins |

The worker is the main driver. Apprise loads all 137 notification services at startup — even ones you will never use. This is what creates the fixed baseline. The core Python, Django, and service scaffolding always loads; however, optional third-party libraries used only by specific services can be evicted at startup if those services are disabled (see Reducing Memory Further with Service Filtering).

The default worker count is (2 × CPU cores) + 1. On a 2-core host that is 5 workers, which can push usage to 700 MB or more before a single notification is sent. Reducing this to APPRISE_WORKER_COUNT=1 is the single most effective memory reduction.
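You can check what the default formula would yield on your own host. A sketch (falling back to 2 cores if `nproc` is unavailable):

```shell
# Default Gunicorn worker count: (2 x CPU cores) + 1
cores=$(nproc 2>/dev/null || echo 2)
workers=$(( 2 * cores + 1 ))
echo "default worker count on this host: $workers"
```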

Python’s internal allocator retains freed memory rather than returning it to the OS immediately — this is normal, not a leak. Memory is only fully released when a worker restarts.

APPRISE_WORKER_MAX_REQUESTS controls how many requests a worker handles before restarting. With the default of 1000 and only a handful of notifications per day, workers may run for months without ever recycling. Setting this to a lower value (e.g., 50) ensures periodic restarts that keep memory closer to the startup baseline.
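To get a rough feel for the recycle interval, divide the request limit by your daily volume. A back-of-the-envelope sketch (the `daily` figure is an assumption, and real totals will be higher if other HTTP requests such as health checks also hit the worker):

```shell
daily=20          # assumed notifications per day
max_requests=50   # APPRISE_WORKER_MAX_REQUESTS
echo "worker restarts roughly every $(( max_requests / daily )) day(s)"
```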

APPRISE_WORKER_MAX_REQUESTS_JITTER adds a random offset to each worker’s restart threshold to prevent all workers from recycling simultaneously.

  • Single-worker deployments: jitter has no effect. The default of 50 is harmless, or you can set it to 0.
  • Multi-worker deployments: leave jitter at the default 50, or scale it proportionally if you lower APPRISE_WORKER_MAX_REQUESTS significantly (e.g., with APPRISE_WORKER_MAX_REQUESTS=50, use a jitter of 10).

Jitter does not affect memory usage — only APPRISE_WORKER_COUNT and APPRISE_WORKER_MAX_REQUESTS do.
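The staggering effect can be sketched as follows: Gunicorn gives each worker a restart threshold of the base limit plus a random offset between 0 and the jitter value, so workers hit their limits at different request counts.

```shell
# Each worker's restart threshold is max_requests + random(0..jitter),
# so with multiple workers no two are likely to recycle at the same moment
max_requests=50
jitter=10
threshold=$(( max_requests + RANDOM % (jitter + 1) ))
echo "this worker restarts after $threshold requests"
```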

| Variable | Default | Description |
| --- | --- | --- |
| APPRISE_WORKER_COUNT | (2×CPUs) + 1 | Number of Gunicorn workers. Set to 1 for low-resource deployments. |
| APPRISE_WORKER_MAX_REQUESTS | 1000 | Requests before a worker restarts and releases accumulated memory. |
| APPRISE_WORKER_MAX_REQUESTS_JITTER | 50 | Random offset per worker to stagger restarts. Irrelevant for single-worker setups. |
| APPRISE_WORKER_TIMEOUT | 300 | Worker timeout in seconds. |

See the Environment Variables reference for a full list.

Advanced: Reducing Memory Further with Service Filtering

If you only use a small set of notification services, you can reclaim additional memory by telling the API which services you actually need. The Apprise API will evict the optional libraries used exclusively by the disabled plugins from memory at startup.

```shell
APPRISE_ALLOW_SERVICES=tgram,ntfy
```

Libraries that are no longer needed by any enabled plugin are automatically removed from Python’s module cache (sys.modules). The savings compound with APPRISE_WORKER_COUNT=1:

| Library | Used By | Freed Memory |
| --- | --- | --- |
| slixmpp | xmpp:// | ~20 MB |
| paho | mqtt:// | ~4 MB |
| gntp | growl:// | ~2 MB |
| smpplib | smpp://, smpps:// | ~2 MB |
| hid | blink1:// | ~2 MB |
| pgpy | mailto://, mailtos:// (PGP only) | ~10 MB |
| cryptography | simplepush://, fcm://, vapid:// | partial† |

† cryptography links against OpenSSL natively. The Python wrapper objects are released, but the underlying shared library remains mapped by the OS for the process lifetime.

For full details on how this works and configuration examples, see Memory Impact of Service Filtering.
