- we previously used `@stdlib/utils` instead of the child package
`@stdlib/copy`, which is a lot smaller and covers our only use of
the parent package (see the sketch below)
- this saves 140+MB of dependencies
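A minimal sketch of what the swap looks like in use; the exact old import path is an assumption:

```js
// Before (assumption about the old import): one helper pulled in via the
// large @stdlib/utils meta-package.
// const copy = require('@stdlib/utils').copy;

// After: depend on the small child package directly.
const copy = require('@stdlib/copy');

const original = {nested: {value: 1}};
const clone = copy(original); // deep copy, so nested objects are not shared
console.log(clone.nested !== original.nested); // true
```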
- we keep ending up with multiple versions of the dependency in our tree,
and it's causing problems when comparing instances
- the workaround I'm implementing for now is to bump the package
everywhere and set a resolution so we only have 1 shared instance
- hopefully we can come up with a better method down the line
- there's a weird situation when we have mixed versions of the dependency,
because different libraries end up comparing instances created by
different copies (see the sketch below)
- this brings the usage up to 1.2.21 so we can fix the build for now
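For context, this is the failure mode duplicated copies cause; the classes are illustrative stand-ins for the real dependency:

```js
// Illustrative only: two copies of the same package in node_modules each
// define their own class, so instances created via one copy fail instanceof
// checks against the other. A single shared copy (forced via a resolution)
// avoids this.
class ThingFromHoistedCopy {}   // as loaded from the top-level copy
class ThingFromNestedCopy {}    // identical source, loaded from a nested duplicate

const value = new ThingFromNestedCopy();
console.log(value instanceof ThingFromHoistedCopy); // false: different class identity
```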
refs https://github.com/TryGhost/Toolbox/issues/501
- this reverts commit 48dda23554
- also includes a resolution for `@elastic/elasticsearch` so we don't
run a version that is potentially problematic - see referenced issue
for context
fixes https://github.com/TryGhost/Team/issues/2366
refs https://ghost.slack.com/archives/C02G9E68C/p1670232405014209
Problem described in the issue.
In the old MEGA flow:
- The `email_verification_required` check is now repeated inside the job
In the new email service flow:
- The `email_verification_required` flag is now checked (this didn't
happen before)
- When generating the email batch recipients, we only include members
that were created before the email was created (see the sketch below).
That way it is impossible to avoid limit checks by inserting new members
between creating an email and sending it.
- We don't need to repeat the check inside the job because of the above
changes
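A rough sketch of the recipient cut-off described above, assuming a knex-style query builder; table and column names are illustrative:

```js
// Illustrative only: when generating batch recipients, exclude members that
// were created after the email row itself, so members inserted between
// "create email" and "send email" cannot bypass the limit checks.
async function getRecipientCandidates(knex, email) {
    return knex('members')
        .select('id', 'email')
        .where('created_at', '<=', email.created_at); // email creation time is the cut-off
}
```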
Improved handling of large imports:
- When checking `email_verification_required`, we now also check if the
import threshold is reached (a new method is introduced in
verificationTrigger specifically for this usage). If it is, we start
the verification process. This is required for long-running imports
that only check the verification threshold at the very end.
- This change increases the concurrency of fastq to 3 (refs
https://ghost.slack.com/archives/C02G9E68C/p1670232405014209). So when
running a long import, it is now possible to send emails without having
to wait for the import. The change above makes sure it is not possible to
get around the verification limits.
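The concurrency bump boils down to the worker count passed when the fastq queue is created; a minimal sketch (the worker function is simplified):

```js
const fastq = require('fastq');

// The worker simply runs whatever task function was pushed onto the queue
// (in Ghost this is the job runner; simplified here).
async function runTask(task) {
    await task();
}

// With a concurrency of 3, an email send job no longer has to wait behind a
// long-running members import.
const queue = fastq.promise(runTask, 3);

queue.push(async () => {/* long-running members import */});
queue.push(async () => {/* email batch send, can start immediately */});
```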
Refactoring:
- Removed the need to use `updateVerificationTrigger` by making
thresholds getters instead of fixed variables (see the sketch below).
- Improved awaiting of members import job in regression test
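A sketch of the getter refactor: the threshold is read lazily on each access, so there is no copied value that `updateVerificationTrigger` would have to refresh (names are illustrative):

```js
// Illustrative only: instead of storing the threshold in a fixed variable,
// the trigger reads it through a getter every time it is needed.
class VerificationTrigger {
    constructor({getApiTriggerThreshold}) {
        this._getApiTriggerThreshold = getApiTriggerThreshold;
    }

    get apiTriggerThreshold() {
        return this._getApiTriggerThreshold(); // always reflects the current settings
    }
}

const settings = {apiTriggerThreshold: 100};
const trigger = new VerificationTrigger({
    getApiTriggerThreshold: () => settings.apiTriggerThreshold
});

settings.apiTriggerThreshold = 500;       // no updateVerificationTrigger() call needed
console.log(trigger.apiTriggerThreshold); // 500
```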
- this was all getting terribly behind so I've done several things:
- majority of `@tryghost/*` except Lexical packages
- gscan + knex-migrator to remove old `@tryghost/errors` usage
- bumped lockfile
refs https://github.com/TryGhost/Team/issues/2339
- Includes a new pattern in the job manager that allows us to properly
await jobs (see the sketch after this list).
- Added new convenience mocking methods to stub settings
- Tests the main flows for bulk sending:
- Sending in multiple batches
- Sending to multiple segments
- Handling a failed batch and retrying that batch
- Fixes bug in batch generation (ordering not working)
In a different PR I'll add more detailed tests.
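A hedged sketch of the "properly await jobs" pattern as it could look in a test; the class and method names are assumptions, not the real job manager API:

```js
// Illustrative pattern only: the job manager keeps the running promise of
// each job so tests can await completion instead of polling or sleeping.
class InlineJobManager {
    constructor() {
        this._running = new Map();
    }

    addJob(name, fn) {
        this._running.set(name, Promise.resolve().then(fn));
    }

    awaitCompletion(name) {
        return this._running.get(name) ?? Promise.resolve();
    }
}

// Usage in a test (the email service argument is a stand-in):
async function testBulkSend(emailService) {
    const jobs = new InlineJobManager();
    jobs.addJob('send-email-batches', () => emailService.sendBatches());
    await jobs.awaitCompletion('send-email-batches');
    // ...assert on the generated batches/recipients here
}
```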
fixes https://github.com/TryGhost/Team/issues/2310
This moves the processing of the events from the event-processor to a
new email-event-processor in the email-service package.
- The `EmailEventProcessor` only translates events from
providerId/emailId to their known emailId, memberId and recipientId, and
dispatches the corresponding events.
- Since `EmailEventProcessor` runs in a separate worker thread, we can't
listen for the dispatched events on the main thread. To accomplish this
communication, the events dispatched from the `EmailEventProcessor`
class are 'posted' via the postMessage method and redispatched on the
main thread (see the sketch after this list).
- A new `EmailEventStorage` class reacts to the email events and stores
them in the database. This code mostly corresponds to the (now deleted)
subclass of the old `EmailEventProcessor`.
- Updating a member's `last_seen_at` timestamp has moved to the
lastSeenAtUpdater.
- Email events no longer store `ObjectID` because these are not
encodable across threads via postMessage
- Includes new E2E tests that test the storage of all supported Mailgun
events. Note that in these tests we run the processing on the main
thread instead of on a separate thread (couldn't do this because
stubbing is not possible across threads)
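A condensed sketch of the cross-thread event flow described above; the event name and payload shape are illustrative:

```js
const {Worker, isMainThread, parentPort} = require('worker_threads');

if (isMainThread) {
    // Main thread: re-dispatch events forwarded from the worker so listeners
    // such as EmailEventStorage and the lastSeenAtUpdater can react to them.
    const worker = new Worker(__filename);
    worker.on('message', ({name, data}) => {
        console.log('re-dispatching', name, data); // stand-in for the real event bus
    });
} else {
    // Worker thread: the processor posts a plain, structured-cloneable payload,
    // which is why the events no longer carry ObjectID instances.
    parentPort.postMessage({name: 'email.delivered', data: {emailRecipientId: 'abc123'}});
}
```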
There are some missing pieces that will get added in later PRs (this PR
focuses on porting the existing functionality):
- Handling temporary failures/bounces
- Capturing the error messages of bounce events
closes https://github.com/TryGhost/Toolbox/issues/402
- The SQL error was thrown whenever a job failed and the job manager tried to persist the error. Persisting an error should only happen for "named" one-off jobs, not for all one-off jobs.
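A sketch of the corrected guard; the parameter names and the persistence helper are assumptions:

```js
// Illustrative only: only "named" one-off jobs have a persisted record that an
// error can be attached to, so unnamed one-off jobs must be skipped.
async function handleOneOffJobError(job, error, persistError) {
    if (job.isOneOff && job.name) {
        await persistError(job.name, error);
    }
    // unnamed one-off jobs: nothing to persist, so no SQL statement is issued
}
```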
fixes https://github.com/TryGhost/Ghost/issues/15190
refs https://github.com/TryGhost/framework/pull/76
- log output always uses UTC timestamps, but it may be desirable to
configure logs to use the local machine timezone
- a new config option has been added to `@tryghost/logging` so you can
switch the logs to the local timezone
- this commit bumps the package and sets the default config option to
`false`, so it doesn't suddenly change the timezone of the logs
- docs will be updated soon but if you'd like to use the
timezone-altered timestamps, you can set `logging.useLocalTime` to
`true` (see the example below)
- credits to https://github.com/levee223 for the implementation and PR
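For reference, the option sits under the `logging` block of Ghost config (shown here as a JS object for illustration):

```js
const loggingConfig = {
    logging: {
        useLocalTime: true // default is false; true renders timestamps in the machine's local timezone
    }
};
```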
- cleans up unused dependencies
- adds missing dependencies that are used in the code
- this should help us be more explicit about the dependencies a package
uses
- because of how the npm scripts were set up, we were running the full
Admin integration tests during the unit tests phase of CI
- this commit renames the majority of `test` to `test:unit` in the
package.json files, and aliases `test` to `test:unit`
- special packages like Admin have no-op'd `test:unit` scripts so we
don't end up running their tests
refs https://github.com/TryGhost/Team/issues/1723
- Added count.replies to comments
- Added replies endpoint
- Limited returned replies to 3.
- Replaced likes_count with count.likes in comments
- Instead of fetching all the likes of a comment to determine the total count, we'll now use count.likes
- Instead of fetching all the likes of a comment to determine whether a member liked a comment, we'll now use count.liked, which returns the number of likes by the current member (0 or 1). This is mapped to `liked` to make it more natural to work with (see the sketch below).
The `members.test.snap` file changed because we no longer include `liked: false` if we didn't fetch the liked relation. And in the comments events of the activity feed the liked property is therefore removed.
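A rough sketch of the `count.liked` to `liked` mapping an output serializer can perform; the serializer shape is an assumption:

```js
// Illustrative only: count.liked is the number of likes by the current member
// (0 or 1), which maps naturally onto a boolean `liked` flag.
function serializeComment(comment) {
    const output = {
        id: comment.id,
        html: comment.html,
        count: {
            likes: comment.count.likes,
            replies: comment.count.replies
        }
    };

    // Only include `liked` when the liked count was actually fetched, which is
    // why `liked: false` disappeared from some snapshots.
    if (comment.count.liked !== undefined) {
        output.liked = comment.count.liked > 0;
    }

    return output;
}
```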
These changes require an update to the `bookshelf-include-count` plugin:
- Updated to also work for nested relations
- This moves the count queries from the `bookshelf-include-count` plugin to the `countRelations` method of each model.
- Updated to keep the counts after saving a model (crud.edit didn't return the counts before)
fixes https://github.com/TryGhost/Toolbox/issues/370
- we no longer need `bthreads` because we can use native
`worker_threads` now that we no longer have to support Node 10
- this allows us to clean up a dependency and stick with native
libraries
- the referenced node-sqlite3 issue should be fixed (or at least, we now
maintain it so we can fix it if not)
no issue
- Small "boyscout" improvements that were noticed while developing a new feature. This boosts the execution time of the test suit a little bit.
refs https://github.com/TryGhost/Toolbox/issues/359
refs 1606a10ff8
- One-off jobs have been released and needed a little bit of documentation so engineers can find their feet quickly with the new concept.
- One-off jobs only ever execute once within the lifetime of a Ghost instance. For example, this feature enabled moving members migrations off the main path of the boot process, which boosts the boot time significantly (see the referenced commit).
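A minimal sketch of the concept; the helper name is an assumption based on the description above, not necessarily the real job manager API:

```js
// Illustrative only: a one-off job runs at most once within the lifetime of a
// Ghost instance, which is what lets expensive work such as members migrations
// move off the boot path.
const alreadyRun = new Set();

async function addOneOffJob({name, job}) {
    if (alreadyRun.has(name)) {
        return; // already executed during this instance's lifetime
    }
    alreadyRun.add(name);
    await job();
}

async function onBoot() {
    // Repeated calls with the same name are no-ops.
    await addOneOffJob({
        name: 'members-migrations',
        job: async () => {/* run members migrations here */}
    });
}
```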
refs https://github.com/TryGhost/Toolbox/issues/358
- When a one-off job fails, it can be restarted during the next call, provided it has been cleared from the job queue.
- This re-adding WILL NOT work for jobs that are restarted within the same process (while being kept in Bree's queue). It specifically targets one-off jobs like migrations that **might** fail and are only added once per process lifetime.