As a back-end web developer and infrastructure architect, I can tell you that things are not quite as simple as throwing more resources at the problem. That's especially true of the mythical "bandwidth". Bandwidth is the amount of data transferred over a connection in a given period of time, and the site going down is 99.999% unlikely to be a bandwidth issue; the connections are more than capable of handling the traffic. The problem will be a bottleneck somewhere.

On a site like this, the CSS, JavaScript, and media assets don't change very often, so they can be heavily cached. They're often referred to as static assets because they're shared across all (or a subset of) users and they don't change. But on a forum there's a lot that can't be cached for long, or in some cases at all: which threads and messages each user has read, the active user list, direct messages, your individual account menu, and so on. That means overhead from database lookups, for example checking when a thread was last read by the logged-in user versus when it was last posted to (a rough sketch of that lookup follows below). There's a lot of reading from and writing to the database for every user as they navigate the site.

Scaling databases is doable, but hard. You end up with more than one database connection trying to write to the same table, or even the same row. In that case the row is typically locked for writing until the earlier transaction completes, so subsequent requests sit waiting, blocked by the requests ahead of them (also sketched below). This is likely where this site falters. You can get around it with various techniques, but it becomes technically difficult if, for example, you have multiple database servers handling different requests, because keeping those servers in sync is a challenge. Also, reliable database scaling is not super cheap, especially if you're load balancing across different geographic zones (like having servers in the western and eastern US, Europe, and Asia).
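To make the read-tracking point concrete, here's a minimal sketch of that kind of per-user lookup. The schema (a `threads` table with `last_posted_at` and a `thread_reads` table with `last_read_at`) is an assumption for illustration, not this forum's actual structure; SQLite is used only so the snippet runs standalone.

```python
# Minimal sketch of the per-user read-tracking lookup described above.
# Table and column names are assumptions, not the forum's real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE threads (id INTEGER PRIMARY KEY, last_posted_at TEXT);
    CREATE TABLE thread_reads (user_id INTEGER, thread_id INTEGER,
                               last_read_at TEXT,
                               PRIMARY KEY (user_id, thread_id));
""")

def unread_threads(user_id):
    # A lookup like this runs for every thread listing a logged-in user loads,
    # which is why this part of the page can't come from a shared cache.
    return conn.execute("""
        SELECT t.id
        FROM threads t
        LEFT JOIN thread_reads r
               ON r.thread_id = t.id AND r.user_id = ?
        WHERE r.last_read_at IS NULL
           OR r.last_read_at < t.last_posted_at
    """, (user_id,)).fetchall()

print(unread_threads(42))
```

Because the result depends on who is asking, a page built from this query can't simply be served from a shared cache the way a CSS file can.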
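Here's roughly what that write-side blocking looks like in practice. This assumes a PostgreSQL backend reached through psycopg2, and a hypothetical `thread_stats` table; both are illustrative, not how this site is actually built.

```python
# Sketch of write-path blocking via a row lock; DSN and schema are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=forum user=app")  # assumed connection details

def record_reply(thread_id):
    with conn:                       # commit on success, roll back on error
        with conn.cursor() as cur:
            # Lock this thread's counter row; any other transaction touching
            # the same row now waits here until we commit.
            cur.execute(
                "SELECT reply_count FROM thread_stats "
                "WHERE thread_id = %s FOR UPDATE",
                (thread_id,),
            )
            (count,) = cur.fetchone()  # assumes the stats row already exists
            cur.execute(
                "UPDATE thread_stats "
                "SET reply_count = %s, last_posted_at = now() "
                "WHERE thread_id = %s",
                (count + 1, thread_id),
            )
```

While one transaction holds that row lock, every other reply to the same thread queues up behind it, which is exactly the kind of pile-up that shows during a posting spike.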
In front of this site is Cloudflare, a service that provides things like DDoS protection, DNS, and reverse-proxy caching to help absorb attacks and spikes in traffic. But if the underlying servers struggle to keep up, Cloudflare can only do so much. As stated earlier, my hunch is that blocking at the database level is what causes the site to struggle during high-traffic periods when lots of people are posting.
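As a rough illustration of why the CDN helps with some traffic but not the rest, here's how an origin might mark what Cloudflare (or any reverse proxy) is allowed to cache. Flask is just a stand-in here, not what this site runs; the point is the `Cache-Control` headers.

```python
# Sketch: static assets get long shared-cache lifetimes, while logged-in pages
# are marked private so every request still hits the origin and the database.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/static/app.css")
def stylesheet():
    resp = make_response("body { margin: 0 }", 200)
    resp.headers["Content-Type"] = "text/css"
    # Same file for everyone, so it's safe to cache at the edge for a week.
    resp.headers["Cache-Control"] = "public, max-age=604800"
    return resp

@app.route("/forum/latest")
def latest_threads():
    resp = make_response("<html>...per-user thread list...</html>", 200)
    # Read markers, DMs, and the account menu differ per user, so the proxy
    # must pass this through to the origin on every request.
    resp.headers["Cache-Control"] = "private, no-cache"
    return resp

if __name__ == "__main__":
    app.run()
```

The static route can be served from the edge without touching the origin; the logged-in page goes back to the application and the database every single time, which is why the database ends up being the bottleneck no matter what sits in front of it.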