Thank you for your feedback.
1.
This should only happen when you save settings. So it should be rare.
As the plugin can’t tell what has changed on your system, the check runs on every save, regardless of whether caching is switched on or off. This is to prevent accidentally deleting or overwriting an existing page cache.
2.
Very good, you are right.
Will be changed on next update.
3.
For memory, the threshold is 50%, so as long as your usage is above 50% you will get the hint.
4.
Difficult to say without context, but you can try:
opcache.use_cwd=0
opcache.revalidate_path=0
If your memory usage is above 75%, increase opcache.memory_consumption
Your opcache.jit_buffer_size is much too high!
8 MB or even 1 MB will probably be enough (check actual usage) – or disable JIT entirely, as WordPress will hardly benefit from JIT at all.
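The suggestions above can be sketched as an ini fragment. The concrete values here are illustrative assumptions, not recommendations – tune them against your own measured usage:

```ini
; Sketch only – adjust to your measured usage.
opcache.use_cwd=0
opcache.revalidate_path=0

; Only raise this if usage is above ~75% (value in MB, no unit).
opcache.memory_consumption=256

; Small JIT buffer – or disable JIT entirely with opcache.jit=off,
; since WordPress hardly benefits from it.
opcache.jit_buffer_size=8M
opcache.jit=tracing
```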
Thank you for your reply. Regarding (1): as I said, the message is always displayed; I attach a screenshot. I also attach another screenshot, because I really can’t understand a) why memory/used is zero, and b) why the recommended value is zero. I double-checked, and php.ini shows the correct value (now 512M). It used to work fine. https://postimg.cc/gallery/KbbyJJ8
Did you ever hide the message by clicking the X on the right?
Please download latest version and let me know.
Of course I did 🙂
Anyway, this is not an important issue, but let me mention some other things.
When I was talking about misses, I was referring to APCu, not OPcache. (One suggestion: it would be nice if you also showed fragmentation in your info.)
https://postimg.cc/xJgXtkYz
Also, in your info plugin OPcache is shown as zero when it is actually 512M (memory 0, used 0, free 249M).
https://postimg.cc/hhJZFVzb
Here is my updated config.
[opcache]
opcache.enable=1
opcache.memory_consumption=512M
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=50000
opcache.max_wasted_percentage=10
opcache.validate_timestamps=0
opcache.revalidate_freq=60
opcache.enable_cli=0
opcache.save_comments=0
opcache.load_comments=0
opcache.revalidate_path=1
opcache.use_cwd=1
opcache.jit_buffer_size=1M
opcache.jit=1235
[apcu]
apc.enabled=1
apc.shm_segments=1
apc.shm_size=128M
apc.max_file_size=1M
apc.stat=1
apc.ttl=3600
-
This reply was modified 4 months ago by
fesarlis.
opcache.memory_consumption
You should not specify units for this directive; PHP assumes MB by default. So try changing it to opcache.memory_consumption=512.
However, v1.8.31 has a parser that even accepts values with a unit – so please update.
Fragmentation:
“APCu doesn’t provide a standard fragmentation metric. Any percentage would be an approximation and different tools calculate it differently, which tends to confuse more than it helps. For that reason I’m not planning to display a fragmentation %.”
I already use 1.8.31, and I have also tried it without a unit. Same output.
Any comment on how to adjust the config to reduce misses? I notice that after a while there is a reduction, but it is still too high (approx. 35%). Sorry if this is not actually your role (telling us how to use APCu), but you are obviously far more experienced with this than most of us.
Have a merry Christmas and happy new year !
-
This reply was modified 4 months ago by
fesarlis.
We need to be precise about what we are talking about.
If you changed OPcache settings and memory is still 0, you most likely have another .ini file overwriting your settings. Try atec-system-info and check which settings are actually effective.
Also flush the OPcache so scripts get reloaded.
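A quick way to check this from the command line is sketched below. Note this inspects the CLI SAPI; your FPM or Apache PHP may scan different .ini files, so results can differ (and it assumes the PHP CLI is installed):

```shell
# List the loaded php.ini and all additionally scanned .ini files,
# so you can spot which file overrides your OPcache settings.
php --ini

# Show the value as the PHP engine actually sees it.
php -r 'var_dump(ini_get("opcache.memory_consumption"));'
```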
As for APCu: if you mean cache misses, that depends on how the cache is used. Are you using it as an object cache or for storage? Or both?
A miss only means that data was requested but never stored. So if some script runs apcu_fetch("foo") but never apcu_store("foo", "bar"), that will always result in high misses.
But the main issue is most likely that APCu is per PHP worker, not per machine. If you have multiple workers running, you might hit a different cache per request; thus you will get different results and only over time reach a higher hit rate – if ever.
Remember: High APCu miss rates are a natural side-effect of per-worker caches, not a sign of misconfiguration.
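The fetch-then-store pattern described above can be sketched like this (the key, TTL, and helper name are hypothetical; requires the APCu extension):

```php
<?php
// Sketch: always store after a miss, so the next request served by
// the same worker becomes a hit.
function cached_get(string $key, callable $producer, int $ttl = 3600)
{
    $success = false;
    $value = apcu_fetch($key, $success); // miss on the first call per worker
    if (!$success) {
        $value = $producer();            // rebuild the data once
        apcu_store($key, $value, $ttl);  // store it for subsequent hits
    }
    return $value;
}

// First call in each worker is a miss; later calls in that worker hit.
$foo = cached_get('foo', fn () => 'bar');
```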
That said, your 128M may be too high, depending on your pm.max_children and machine memory.
MC to you too.
-
This reply was modified 4 months ago by
docjojo.