fixes PROD-205
fixes PROD-219
- removed the right placeholder complexity
- the preview renders the current sender / reply-to address in
real-time, regardless of whether it's valid
- if the reply-to address does not match the custom sending domain, show
an inline error
This updates the AdapterCacheRedis instance so that it can update itself
when reading from the cache. For this to work we need to pass a
`fetchData` function to the `get` method.
In the case of a cache miss, we will read the data via the `fetchData`
function, and store it in the cache, before returning the value to the caller.
When coupled with a `refreshAheadFactor` config, we go a step further and
implement the "Refresh Ahead" caching strategy. This means we refresh the
contents of the cache in the background. The refresh happens on a cache read,
and only once the data in the cache has less than a certain fraction of its
TTL remaining; that fraction is set as a decimal value between 0 and 1.
e.g.
ttl = 100s
refreshAheadFactor = 0.2;
Any read from the cache that happens _after_ 80s will do a background refresh
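For illustration, a minimal sketch of what that `get(key, fetchData)` flow could look like; the function name and the `cache`/`ttl` parameters here are assumptions for the sketch, not the actual adapter internals:

```js
// Minimal sketch (not the actual adapter code): a refresh-ahead read with a
// fetchData fallback. `cache` is assumed to expose get/set/ttl, `ttl` is the
// configured TTL in seconds, and `refreshAheadFactor` is a decimal 0–1.
async function getWithRefreshAhead({cache, ttl, refreshAheadFactor}, key, fetchData) {
    const cached = await cache.get(key);

    if (cached === undefined || cached === null) {
        // Cache miss: fetch the data, store it, then return it to the caller
        const data = await fetchData();
        await cache.set(key, data, ttl);
        return data;
    }

    if (refreshAheadFactor) {
        const remainingTtl = await cache.ttl(key); // seconds left for this key
        if (remainingTtl < ttl * refreshAheadFactor) {
            // Background refresh: intentionally not awaited, and a failed
            // refresh must never break the current read
            fetchData()
                .then(data => cache.set(key, data, ttl))
                .catch(() => {});
        }
    }

    return cached;
}
```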
Having the code use `async/await` makes it more readable, and extracting the
execution to a separate function makes it easier to run in the background in
the future.
This logic is so simple it isn't worth having the indirection of another class.
This also removes the indirection of wrapped getters/setters, which is useful
because otherwise we need to update the wrapper with new methods each time
the underlying implementation is changed. There was a note about losing the
context of `this`, but I haven't found anywhere that the context is lost.
no issue
- potentially fixes a small performance issue by avoiding the homepage of
your publication being loaded when an href from Portal does not exist
while loading Offers, which could cause flashing.
closes https://github.com/TryGhost/Product/issues/4247
- bumps `@tryghost/kg-default-transforms` with a fix to our de-nesting transform, so ListNode is no longer ignored as a badly nested child node, which can occur through copy/paste from other editors
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
fixes PROD-201
The issue was caused because we were searching the 'name' field instead of the 'title' field.
This also improves performance when loading the posts (rough sketch below):
- Makes sure no relations are loaded
- Only returns the fields we actually need
- Stops using limit=all, replacing it with a network-based search
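A rough sketch of the kind of narrowed-down query this describes; the store name, field list and limit are illustrative, not taken from the change:

```js
// Hypothetical sketch of the narrowed search query; the field list, order
// and limit are illustrative only.
async function searchPosts(store, term) {
    return store.query('post', {
        filter: `title:~'${term}'`, // search the title field, not name
        fields: 'id,title,status,updated_at', // only the fields we actually need
        order: 'updated_at desc',
        limit: 25 // a bounded, network-based search instead of limit=all
    });
}
```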
refs https://github.com/TryGhost/Ghost/pull/19284
- the previous commit had only partially rolled back usage of the `errorInfo` variable, but it wasn't caught because the line in question has linting disabled
The main changes are:
- Updating the pipeline to allow for doing a background refresh of the
cache
- Removing the use of the EventAwareCacheWrapper for the posts public
cache
### Background refresh
This is just an initial implementation, and tbh it doesn't sit right
with me that the logic for this is in the pipeline - I think this should
sit in the cache implementation itself, and then we call out to it with
something like `cache.get(key, fetchData)`, with the updates happening
internally.
The `cache-manager` project actually has a method like this called
`wrap` - but every time I've used it it hangs, and debugging was a pain,
so I don't really trust it.
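For context, the pipeline-level version is essentially a fire-and-forget refresh wrapped around the cache read; roughly (names assumed, not the actual pipeline code):

```js
// Rough sketch: return the cached response straight away and, when it is
// getting close to expiry, kick off a refresh without awaiting it.
async function readThroughCache(cache, key, fetchFreshResponse, shouldRefreshAhead) {
    const cached = await cache.get(key);

    if (cached) {
        if (shouldRefreshAhead(cached)) {
            // Fire-and-forget: the caller gets the cached value while the
            // cache is repopulated in the background
            fetchFreshResponse()
                .then(response => cache.set(key, response))
                .catch(() => {});
        }
        return cached;
    }

    const response = await fetchFreshResponse();
    await cache.set(key, response);
    return response;
}
```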
### EventAwareCacheWrapper
This is such a small amount of logic, I don't think it's worth creating
an entire wrapper for it, at least not a class based one. I would be
happy to refactor this to use a `Proxy` too, so that we don't have to
add methods to it each time we wanna change the underlying cache
implementation.
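A `Proxy`-based version could look something like this (just a sketch of the idea, not code from this change; the event name and `reset` method are illustrative):

```js
// Sketch only: an event-aware cache built with a Proxy instead of a wrapper
// class. Reads and writes pass straight through to the underlying cache, so
// adding methods to the cache never requires touching this code.
function makeEventAwareCache(cache, events, resetEvents = ['site.changed']) {
    for (const eventName of resetEvents) {
        events.on(eventName, () => cache.reset());
    }

    return new Proxy(cache, {
        get(target, prop) {
            const value = target[prop];
            // Bind methods so `this` inside the cache still refers to the
            // real cache instance rather than the proxy
            return typeof value === 'function' ? value.bind(target) : value;
        }
    });
}
```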
Fixes #19246
Resolved an issue with dark mode in the recommendation tabs footer by
making TailwindCSS adjustments. Previously, the footer stayed in light
mode when switching from light to dark mode.
**Changes Made:**
Implemented a TailwindCSS class for a dark background in the table
footer to ensure consistency with dark mode.
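Illustrative example only (the real component and class names differ): the fix boils down to pairing the light background with a `dark:` variant on the footer element, along these lines.

```jsx
import React from 'react';

// Hypothetical example: pairing the light background with a dark: variant
// keeps the table footer consistent with the active theme.
const TableFooter = ({children}) => (
    <tfoot className="bg-white dark:bg-black">
        {children}
    </tfoot>
);

export default TableFooter;
```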
**Before Fix:**
![288238357-1801884e-a8a4-48ac-b59a-0b2260561cdd](https://github.com/TryGhost/Ghost/assets/24241624/787f695c-fa7c-4ae8-aa34-9c9c53df4686)
**After Fix:**
![Screenshot 2023-12-06 at 11 15 19 AM](https://github.com/TryGhost/Ghost/assets/24241624/4bb65ffe-735a-440c-801c-520f991585e6)
**This fix enhances the user experience when working with the
Recommendations tab in settings.**
Please let me know if any further adjustments or clarifications are
needed. Thank you!
no issue
The data generator created an offer for the free product. This caused an
error in the admin UI because it couldn't find the tier for the offer.
This fixes the issue in both the data generator and the admin UI.
no issue
The data generator ran out of memory when trying to generate fake data
for >2M members. This adds some improvements to make sure it doesn't run
out of memory.
---------
Co-authored-by: Fabien "egg" O'Carroll <fabien@allou.is>
no issue
When creating the batches for an email send, we log a message to
Sentry when there is an unexpected offset of 1% between creating the
email and actually creating the batch recipients. We were using a method
that was not mapped in our Sentry proxy.
Location of error: ghost/email-service/lib/BatchSendingService.js:286
no issue
The members stats API is a lot slower than the normal members count
cache, and we use the members count cache in a lot more places, so we
can cache it more.
no issue
When we open the editor, we fire 4 requests to fetch member counts. This
commit fixes this by replacing those calls with the members count cache
service.
no issue
- the members API endpoint by default adds `order by created_at` to the SQL queries, which creates unnecessary overhead when we only care about a count, because MySQL sorts the table before querying the single member
- specifying an explicit `order by id` overrides the default API behaviour (see the sketch below)
- locally, with 2 million members, the query times drop from >5sec to ~1sec
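As a sketch of what this looks like in practice (the real call site and params differ):

```js
// Illustrative sketch only: the count comes from the pagination metadata,
// and the explicit `order` on the primary key overrides the API's default
// `order by created_at`, so MySQL no longer sorts the whole table first.
async function fetchMemberCount(store) {
    const result = await store.query('member', {
        limit: 1,
        order: 'id ASC'
    });
    return result.meta.pagination.total;
}
```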
no refs
- member_count endpoint was queried multiple times on load
- moved to collectively use the member stats service
- prevented multiple tasks from being queued up to return the count (rough sketch below)
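One rough sketch of the "prevent multiple queued tasks" part, assuming an in-flight promise guard (not the actual service code; the class name is illustrative):

```js
// Sketch only: share a single in-flight request so that concurrent callers
// on page load await the same count fetch instead of queueing extra tasks.
class MemberCountCache {
    constructor(fetchMemberCount) {
        this.fetchMemberCount = fetchMemberCount;
        this.inFlight = null;
    }

    async count() {
        if (!this.inFlight) {
            this.inFlight = this.fetchMemberCount().finally(() => {
                this.inFlight = null;
            });
        }
        return this.inFlight;
    }
}
```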
- we want to pass in the schema tables instead of cross-requiring them
from a different package, because cross-requiring means the package isn't
standalone and moving the code structure around breaks the data generator
(rough sketch below)
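A rough sketch of the injection (names and paths assumed, not taken from the change):

```js
// Sketch only: the schema tables are injected by the caller, so the data
// generator package stays standalone and survives code moves.
class DataGenerator {
    constructor({schemaTables, knex}) {
        this.schemaTables = schemaTables;
        this.knex = knex;
    }
}

// Inside Ghost, the caller wires in its own schema definition, e.g.:
// const schemaTables = require('./data/schema').tables; // assumed path
// const generator = new DataGenerator({schemaTables, knex});
```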