RHEL / DB2 can handle the scale; the considerations aren’t so much the software as the network architecture and hardware performance.
For a deployment of that size, I’d suggest engaging Professional Services to consult. Contact your TA or private message me if you’d like some info on how to get started.
If you are going to proceed on your own, I’d strongly recommend a detailed read-through of the Capacity and Planning Guide at https://bigfix-mark.github.io/ — it will help you drill down on the performance you’ll need from your server and storage, and it includes benchmark tools to help determine your capacity limits on whatever platform you’re using.
I’d also point out that as of today, Insights is only an option on SQL Server-based deployments, not on DB2.
There are a lot of considerations listed in the Capacity guide, but at that scale storage speed is likely to be the biggest concern. Every deployment I’ve worked with at over 100k endpoints really benefits from RAID SSD storage — and if you can get an NVMe SSD RAID array, that would be best, especially for the database storage.
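Before running the full benchmark tools from the guide, you can get a rough first read on write throughput with plain dd. This is only a quick sanity check, not a substitute for a proper random-I/O benchmark (a tool like fio is better for that); the /tmp path below is a placeholder — point it at the volume that will actually hold the database:

```shell
# Rough sequential-write check on the target volume (path is a placeholder).
# conv=fdatasync forces the data to disk before dd reports a speed, so the
# MB/s figure reflects the storage itself rather than the page cache.
dd if=/dev/zero of=/tmp/io_test bs=1M count=256 conv=fdatasync
rm -f /tmp/io_test
```

dd prints the elapsed time and MB/s on completion; if that number is already low on an idle box, the storage won’t keep up once the database is under load.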
Also plan to keep your Consoles close to the Root Server — we usually set up a Terminal Server to run the Consoles and keep it on the same subnet as the Root Server itself to reduce Console latency.