On our way to fully understanding how our software works and where the bottlenecks in our code are, it’s quite handy to have at least one server instance with Xdebug installed.
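For reference, a minimal php.ini fragment for such an instance might look like this (the extension path and output directory are placeholders for your own setup):

```ini
; Load the Xdebug extension (the path depends on your PHP build)
zend_extension=/usr/lib/php5/20121212/xdebug.so

; Don't profile every request -- only when explicitly triggered
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=1

; Where the cachegrind.out.* files for KCachegrind end up
xdebug.profiler_output_dir=/tmp/profiles
xdebug.profiler_output_name=cachegrind.out.%t.%p
```

With `profiler_enable_trigger` set, a request is only profiled when you append `XDEBUG_PROFILE` as a GET/POST parameter or cookie, so the profiler doesn’t slow down every request on the instance.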
Since we had some performance issues on fahrrad.de and the other shops yesterday, we decided to profile the overview page. It’s the page with the highest database/cache usage and a lot of nested controller calls (sf2 subrequests), and even small bottlenecks on that page usually kill the page load time.
When we create profiles for later analysis with KCachegrind (KDE), we usually profile the page once with the cache disabled and once with the cache enabled. That way we can see where the initial page load might be further optimized and how it differs from a fully cached page.
Loading the profile file into KCachegrind, I normally check the “Callee Map” first. Clicking on the main node in the Caller List on the left-hand side gives you a nice image with loads of nested, colored rectangles of varying size. Each rectangle represents one function call, its size corresponds to that call’s share of the processing time, and nested rectangles are nested calls.
As you can see here:
there are a lot of calls, and three blocks in particular stick out of the mass: they’re massive. Like we-are-40-percent-of-the-processing-time massive. They represent our config manager and its loading of Yaml files via the Symfony2 Yaml component. It’s a one-time thing, since the whole config stuff is cached in APC for one week, as long as there’s no deployment in between. So nothing really obvious here… Next I check the Callgraph for any red-colored method calls. Since there’s nothing really bad besides our ConfigManager, we’ll check the cached version of the page.
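Roughly what such a one-time cache looks like — a sketch of the idea, not our actual ConfigManager; `loadConfig`, `configCacheKey` and the `$release` parameter are made-up names:

```php
<?php
// Parse the Yaml files once, then keep the merged result in APC for a
// week. The key includes a release identifier so a deployment starts
// with a fresh cache.

use Symfony\Component\Yaml\Yaml;

const CONFIG_TTL = 604800; // one week in seconds

function configCacheKey($release)
{
    return 'config.' . $release;
}

function loadConfig($release, array $yamlFiles)
{
    $key = configCacheKey($release);

    $config = apc_fetch($key, $success);
    if ($success) {
        return $config; // cache hit: no Yaml parsing at all
    }

    // Cache miss: this is the expensive part we saw in KCachegrind
    $config = array();
    foreach ($yamlFiles as $file) {
        $config = array_merge($config, Yaml::parse(file_get_contents($file)));
    }

    apc_store($key, $config, CONFIG_TTL);

    return $config;
}
```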
As expected, the whole config stuff is gone and there are lots of small rectangles. But one thing catches the eye: the large green block in the upper left, representing a PDO object with a query. That’s a real bummer… an uncached call to the database.
Switching to the Callgraph reveals: It’s our realtime product availability check.
So here’s at least something we can check: is it really necessary to make a database call every time? Maybe store this in Redis or…
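A hypothetical version of that availability check with a short-lived Redis cache in front of the database — all names, the query, and the 60-second TTL are made up for illustration:

```php
<?php
// Instead of hitting the database on every page view, keep the
// availability flag in Redis for a short time (phpredis extension).

function availabilityCacheKey($productId)
{
    return 'availability:' . $productId;
}

function isAvailable($productId, Redis $redis, PDO $pdo)
{
    $key = availabilityCacheKey($productId);

    $cached = $redis->get($key);
    if ($cached !== false) {
        return (bool) $cached; // served from Redis, no database round trip
    }

    // Cache miss: one real query, then remember the answer for 60 seconds.
    $stmt = $pdo->prepare('SELECT stock > 0 FROM products WHERE id = ?');
    $stmt->execute(array($productId));
    $available = (bool) $stmt->fetchColumn();

    $redis->setex($key, 60, (int) $available);

    return $available;
}
```

The trade-off is that “realtime” becomes “up to 60 seconds old”, which is something the business side would have to sign off on.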
Besides looking at and interpreting nice images, there’s always the “Caller List” with all method calls, where you can check which ones use most of the processing time, how often they are called, …
As you can see: using the Xdebug profiler can really help you identify bottlenecks!