It does run with caches enabled. My memory is murky, but I remember Go processing requests roughly an order of magnitude (10x) faster. Maybe it's the dependencies reconnecting on each request, or the sheer number of classes in the monolith? I remember that marking dependencies as "lazy load" helped.
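For anyone curious what "lazy load" means here: in a Symfony-style container (an assumption on my part; the comment doesn't name the framework, and the service name below is made up) it's a one-line flag that makes the container inject a proxy and defer constructing the service until something actually calls it:

```yaml
# config/services.yaml — hypothetical example
services:
    App\Service\HeavyReportGenerator:
        lazy: true   # container injects a proxy; the real object is built on first use
```

That way a request that never touches the heavy service never pays its construction cost.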
>there are OPCache tweaks, preloading facilities which help improve the request initialization performance
I remember we disabled some of the preload settings because we hit a bug that manifested as a segfault: PHP's shared memory space for preloaded code was getting corrupted under high load.
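For context, these are the two ini directives that turn preloading on (the path is a made-up example; the directive names are real):

```ini
; php.ini — opcode-cache preloading
opcache.preload=/var/www/app/preload.php   ; hypothetical path to a script that loads classes at startup
opcache.preload_user=www-data              ; system user the preload script runs as
```

Commenting out `opcache.preload` is enough to disable the feature entirely, which is effectively what we did.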
In any case, everything is fast and usually doesn't require a lot of tuning in Go out of the box.
However, I do love PHP's shared-nothing architecture for its memory isolation: we have thousands of B2B tenants, and with PHP I'm confident we won't accidentally spill one company's data into another company's account. Think of the incident where ChatGPT exposed users' conversations to strangers: a race condition in the Redis connection pool, living in Python's shared memory space, handed out a connection belonging to a different user.
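To make the failure mode concrete, here's a minimal Go sketch of the bug class (my own illustration, not the actual ChatGPT/redis-py code): a naive pool recycles connections that carry per-user session state, so the next tenant to check one out inherits the previous tenant's identity.

```go
package main

import "fmt"

// conn stands in for a pooled client connection that carries
// per-user state, e.g. an authenticated session.
type conn struct {
	authedUser string // session state left over from the previous checkout
}

// pool is a deliberately naive free-list pool: connections go back
// into the pool without their per-user state being reset.
type pool struct {
	free []*conn
}

func (p *pool) get() *conn {
	if n := len(p.free); n > 0 {
		c := p.free[n-1]
		p.free = p.free[:n-1]
		return c // BUG: stale authedUser from the previous user survives
	}
	return &conn{}
}

func (p *pool) put(c *conn) {
	// The fix would be `c.authedUser = ""` here, before reuse.
	p.free = append(p.free, c)
}

func main() {
	p := &pool{}

	// Request 1: tenant A authenticates on a fresh connection.
	c1 := p.get()
	c1.authedUser = "tenant-A"
	p.put(c1)

	// Request 2: tenant B receives the recycled connection; any code
	// that trusts the connection's session now sees tenant A's identity.
	c2 := p.get()
	fmt.Println(c2.authedUser) // prints "tenant-A" — a cross-tenant leak
}
```

In a shared-nothing PHP worker this entire class of bug is off the table: each request starts from a blank process memory, so there is no long-lived in-process pool to leak state between tenants in the first place.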