And I guess this question is two parts: 1. Regarding the current lemmy implementation, and 2. The activityPub protocol in general

  • PriorProject@lemmy.world
    1 year ago

    Every complex system (and federated systems like Lemmy qualify) has more than one potential bottleneck that can become a problem in different conditions.

    • Right now, the common performance bottleneck for Lemmy instances is heavy database reads caused by users browsing. Many of these queries are written inefficiently and can be optimized, and there are also things that can be done in Postgres to scale further. But browse traffic is one kind of workload that can reach limits, and it gets stressed when lots of users are active on one big instance.
    • Federated networks CAN experience federated replication load when there are lots of instances to deliver federation messages to. If I comment on this post, and the server hosting the community has to deliver the comment to (pinky to mouth) one million instances… that’s a different kind of workload and it gets stressed when there are lots of different instances subscribed to a single community.
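    To make the fan-out cost concrete, here is a rough sketch (my own illustration, not Lemmy's actual code) of why delivery load scales with the number of subscribed instances rather than the number of users: each remote instance gets exactly one copy of an activity, no matter how many of its local users follow the community.

    ```python
    # Hypothetical sketch of ActivityPub-style fan-out, not Lemmy's real
    # delivery code. The community's home instance sends one copy of each
    # activity to every remote instance with at least one subscriber.

    def fanout_deliveries(subscribed_instances: list[str]) -> int:
        """Return how many outbound deliveries one comment triggers."""
        deliveries = 0
        for instance in set(subscribed_instances):
            # one copy per remote instance, regardless of how many of
            # that instance's local users follow the community
            deliveries += 1
        return deliveries

    # one comment, a million subscribing instances -> a million deliveries
    print(fanout_deliveries([f"instance-{i}.example" for i in range(1_000_000)]))
    ```

    The point of the sketch is that the sender's work is proportional to the count of distinct instances, which is why many small instances are the expensive case for federation traffic.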

    The Goldilocks zone is a medium number of medium-sized instances. Then each federation message efficiently powers browse traffic for a lot of users, and no single instance gets overwhelmed with browse load.
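    A toy model (again, my own illustration with made-up numbers, not a Lemmy benchmark) shows the trade-off: fix the total user count, vary how many instances they are spread across, and watch the two loads pull in opposite directions for one popular community.

    ```python
    # Toy model of the two competing loads for a single popular community.
    # Assumptions: users are spread evenly, and every instance subscribes.

    def loads(total_users: int, num_instances: int) -> tuple[int, float]:
        deliveries_per_activity = num_instances               # federation fan-out
        browse_users_per_instance = total_users / num_instances  # DB read load
        return deliveries_per_activity, browse_users_per_instance

    for n in (1, 100, 100_000):
        fanout, browse = loads(100_000, n)
        print(f"{n:>7} instances: {fanout} deliveries/activity, "
              f"{browse:.0f} users/instance")
    ```

    One giant instance minimizes federation traffic but concentrates all the browse load; a hundred thousand tiny instances spread the browse load but make every comment cost a hundred thousand deliveries. The middle row is the Goldilocks case.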

    In practice, this is not how networks organize. There will both be instances that are “too large” and also lots of small instances. Right now, the Lemmy network is small and federation traffic is not a meaningful bottleneck. Browse traffic is, and that’s what the devs are working on. But with time, the limits of both these things can be pushed further out improving scalability of the etwork in both directions.