
14 Oct 2015, 15:04
Tim Romberg (1 post)

Overall this book has been a great read so far, so I am very surprised by some of the statements in section 9.1, which to me suggest the author misunderstands a few things about threads and performance. Perhaps I am not reading the most up-to-date version? (I am reading it on O’Reilly Safari; no version number is indicated.)

  1. “By the time thirty requests are active, more than 80% of CPU time is spent uselessly waiting for one of the connections to become available”.

I assume the author knows that a thread waiting for a resource to become available does not consume CPU time. On any application server with a reasonable configuration, the majority of threads at ANY given point in time are waiting, either for a resource or for a time slice (of course, a thread dump will not show the latter case as waiting).
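To make the point concrete, here is a small sketch of my own (not from the book): ten threads block on a lock that the main thread holds for half a second. If blocked threads burned CPU, the process would accumulate roughly 10 × 0.5s of CPU time over that window; in practice the OS parks them and CPU usage stays near zero.

```python
import threading
import time

lock = threading.Lock()
lock.acquire()  # held by the main thread for the whole measurement window

def worker():
    lock.acquire()   # blocks here; consumes no CPU while waiting
    lock.release()

threads = [threading.Thread(target=worker) for _ in range(10)]
cpu_before = time.process_time()
for t in threads:
    t.start()
time.sleep(0.5)      # let the workers sit blocked for half a second
cpu_used = time.process_time() - cpu_before

lock.release()       # unblock the workers so the program can exit
for t in threads:
    t.join()

print(f"CPU time while 10 threads waited ~0.5s: {cpu_used:.3f}s")
```

A thread dump taken during the sleep would show all ten workers in a waiting state, which is exactly the healthy steady state of a loaded app server.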

  2. “To guarantee this, make the resource pool size equal to the number of threads”

According to my understanding, the total throughput capacity of a DB server does not increase but decreases when you raise the number of threads/processes beyond a certain point - even a single-digit figure - especially if we are talking about the threads that actually handle queries. This is because a higher percentage of time is spent on context switching (paging data in and out of RAM / CPU cache, moving a disk head) rather than actually serving the query.

If every query used the same amount of resources, it would be optimal to use just enough threads to fully utilise the available hardware and get each query processed and out the door as quickly as possible. Even if many queries have to wait at first, total throughput is maximized and average processing time minimized. In practice one uses more threads/processes to limit the impact of expensive queries on inexpensive ones. Kanban follows the same principle, keeping work-in-progress as low as possible while accounting for variability in demand.

So, if a majority of application server requests really make calls to the DB server, one should rather think about decreasing the number of threads on the application server (thereby decreasing WIP and context switching there as well) than increasing the number of threads on the DB server. The number of threads in the app layer is usually higher because many requests don’t hit the DB (level 2 cache etc.).
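As a toy illustration of that throughput argument (my own sketch; every number here is made up, not measured on any real DB server): model parallelism as capped at the core count, with each thread beyond it inflating service time by a fixed context-switching penalty. Throughput then peaks near the hardware’s parallelism and falls off as more threads are added.

```python
def throughput(n_threads, cores=4, service_s=0.010, overhead=0.05):
    """Requests/sec under a crude model: useful parallelism is capped
    at the core count, and every thread beyond it inflates per-request
    service time by a fixed context-switch/cache-eviction penalty."""
    parallelism = min(n_threads, cores)
    inflated_service = service_s * (1 + overhead * max(0, n_threads - cores))
    return parallelism / inflated_service

for n in (1, 4, 8, 16, 64):
    print(f"{n:3d} threads -> {throughput(n):6.0f} req/s")
```

Under these assumed parameters the model peaks at 4 threads (the core count) and degrades steadily after that - which is the intuition behind keeping the pool small rather than matching it to a large app-server thread count.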
