The internal code that assembles all table text into one string has been optimized. The slowdown was discovered with ~2000 users in the BFM file /usr/local/directadmin/data/admin/brute_user.data, because the BFM User table (top right) shows all rows on one page, producing a very large table. The old code got slower and slower with each appended segment, the larger the string grew, because the full length of the string was re-computed on every append. The new code keeps track of the tail end of the string, so the entire string size isn't re-computed each time a segment is added. Before the optimization, the table took 70 seconds to build; now it's instant (for the 2000-user test).

This will affect all dynamically generated tables in DA. In most cases, if you're only looking at 50 rows, the table isn't big, so the change won't be very noticeable. But anytime you select "All" (Advanced Search), or a table is showing many rows on one page, that's when this change will really be noticeable.

Also re-wrote the title and cell (td) structs, adding a "row" struct (tr), to use less RAM. Previously, each cell held variables for its row, duplicating those variables more than was needed. The new row struct stores all of the per-row settings, and each cell in the row's linked list carries just the info the cells need. Also merged 3 ints into 1 using binary masking (the checkbox settings for the row).

In many cases, a td cell's "value" (used for searching/sorting) was not what we actually wanted to display. For example, a filesize in bytes that we want shown human-readable, or a unix timestamp that we want formatted as day/month/year. Previously, a sort of hidden a-href hack was used: <a href="value"></a>Actual String, to hide the value and show the Actual String. Another td cell change is the addition of a simpler override string, so none of that mess is needed, making the tables smaller.

For tests, I decided to ramp it up to a more extreme case to see how it would perform.
I added ~77,000 random row entries. Sorting that data took only 0.07 seconds, and the string assembly took 0.55 seconds, creating a monster 29.1MB table string (vs 70 seconds for 2000 rows with the old code). Obviously, no table should show that many rows on one page, but regardless, I'm pleased with the results.

You can also use the BFM "Advanced Search" on the log entries table to show "All" entries. For me, a 26-page table (meaning 50 x 26 rows) now loads almost instantly when all entries are displayed on one page. Using the Chrome "Network" debug page, the download for this "All" view starts ~350ms after the request. In comparison, the main / page starts ~250ms after the request. So reading in all brute data (IPs, Users, 26 pages of logs onto 1 page, skip list, blocked IPs), AND assembling all tables/skins, only takes ~100ms. I didn't take stats on this from before, but I know it was a bit sluggish even with only a single page (50 entries) being shown.