351 559 tasks have been processed this Sunday, and our queue is now empty.

Our instance had issues with the Redis cache and then with PostgreSQL indexes.

The issues are resolved, and the situation is being monitored.

@lizsmells @tradeforexcopier I'm not sure I can tell; I'm not sure I'm an organic human either ^^

@lizsmells Thanks for your support :) The donation is appreciated.

@Muriel How urban public space is organized is one of my interests.

#maintenance

Dereckson boosted

Maybe we were beautiful- the way the first waves of tsunami are, the way the flames consuming dying embers are, the way two stars before collision- like a tragic wildfire, destined to consume.

- poetry by me!

@benoit Online has always had issues with their consoles.

There is a fun variant: when the serial console rate is too fast and the OS logs out the console session every 20 seconds.

You've got 20 seconds to do your task :p

Queue backlog has been fully processed at 19:46 UTC.

Since then, our instance has been processing events normally.

The home feeds have been rebuilt, so we're sure they are consistent and in chronological order.

All has looked good since then.

@thaj @johnnynull There are still 95 402 jobs to process.

Here's how it works (rough sketch below):
1. it pulls the status
2. it prints it on the federated timeline
3. it populates new jobs for notifications, timeline updates, link checks, reply checks, etc.
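For the curious, here is a minimal Python sketch of that fan-out idea; the real pipeline is Mastodon's Ruby/Sidekiq code, and every name below (Job, followers_of, process_status) is invented for illustration only.

```python
# Illustrative sketch only: Mastodon's real pipeline is Ruby/Sidekiq;
# the names and data shapes below are made up for this example.
from collections import deque
from dataclasses import dataclass


@dataclass
class Job:
    kind: str      # e.g. "update_home_timeline", "notify_mentions", "check_links"
    payload: dict


queue = deque()            # stands in for a Sidekiq/Redis queue
federated_timeline = []    # stands in for the public timeline


def followers_of(author: str) -> list[str]:
    # Hypothetical lookup; in reality this is a database query.
    return ["alice", "bob"]


def process_status(status: dict) -> None:
    """One queued status: publish it, then fan out follow-up jobs."""
    # 1. the status has already been pulled from the queue by the caller
    # 2. print it on the federated timeline
    federated_timeline.append(status)
    # 3. populate new jobs: notifications, timeline updates, link/reply checks
    for follower in followers_of(status["author"]):
        queue.append(Job("update_home_timeline", {"user": follower, "status": status}))
    queue.append(Job("notify_mentions", {"status": status}))
    queue.append(Job("check_links", {"status": status}))
    queue.append(Job("check_replies", {"status": status}))


# Example: one incoming status produces several new jobs.
process_status({"author": "dereckson", "text": "Queue is draining."})
print(len(queue), "follow-up jobs enqueued")
```

This is why one processed status can grow the queue before shrinking it: each status fans out into several smaller jobs.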

I'll rebuild all the home timelines when the queue is at 0.

@StuC @lizsmells @thaj Yup, that was the main issue: a DM is also a timeline to push to.

So the big question is how to prepare a Nagios check that triggers when a job has been stuck in a Sidekiq queue for longer than a given delay.

And a check that the queue length is greater than a specified threshold.
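A minimal sketch of such a check, assuming the standard Sidekiq layout where each queue is a Redis list named queue:<name> holding JSON jobs with an enqueued_at epoch timestamp; the queue name, thresholds and Redis address below are placeholders, not this instance's actual configuration.

```python
#!/usr/bin/env python3
# Nagios-style check for a Sidekiq queue (sketch, not production config).
import json
import sys
import time

import redis  # requires the redis-py package

QUEUE = "queue:default"    # assumption: the queue to watch
MAX_LENGTH = 10_000        # WARNING above this many pending jobs
MAX_LATENCY = 600          # CRITICAL if the oldest job has waited > 10 minutes


def main() -> int:
    r = redis.Redis(host="localhost", port=6379)
    length = r.llen(QUEUE)

    latency = 0.0
    oldest = r.lrange(QUEUE, -1, -1)     # the oldest job sits at the tail of the list
    if oldest:
        job = json.loads(oldest[0])
        latency = time.time() - job.get("enqueued_at", time.time())

    message = f"{QUEUE}: {length} jobs, oldest waiting {latency:.0f}s"
    if latency > MAX_LATENCY:
        print("CRITICAL -", message)
        return 2                          # Nagios CRITICAL
    if length > MAX_LENGTH:
        print("WARNING -", message)
        return 1                          # Nagios WARNING
    print("OK -", message)
    return 0                              # Nagios OK


if __name__ == "__main__":
    sys.exit(main())
```

Nagios only cares about the exit code and the first output line, so the same script covers both the "stuck job" (latency) and "queue too long" (length) cases.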

Timeline is currently updating again.

I increased the number of workers, provisioning a new machine to temporarily help process the queue. With that, we currently process 30 tasks per second.

That means that with 130 000 tasks to process, plus a little extra for new incoming tasks, the queue will be empty in about 2 hours.
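A quick back-of-the-envelope check of that estimate; the incoming-task rate is an assumed figure, not a measured one.

```python
# Rough drain-time estimate; the incoming rate is a guessed figure.
backlog = 130_000      # tasks already queued
rate = 30              # tasks processed per second
incoming = 5           # assumed new tasks arriving per second meanwhile
hours = backlog / (rate - incoming) / 3600
print(f"~{hours:.1f} hours to empty the queue")   # ~1.4 hours, under the 2 h estimate
```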

We've got a queue incident on the Mastodon instance.

Some users on Pleroma pushed HUGE MP4 videos, and the queue got stuck processing them.

That gave processes eating around 97.2% CPU and 300 MB RAM, each running for dozens of minutes (the server isn't optimized for video rendering).

And so while the video conversions occurred, all the workers were busy, and we didn't have timeline updates.

No data is lost, as everything is saved in Redis as jobs to process.
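A toy simulation of that starvation effect, with made-up job counts and durations: a handful of long transcodes monopolise a shared worker pool, while reserving workers per job type would keep timeline pushes flowing.

```python
# Toy model of worker starvation by long media jobs (all numbers are made up).
import heapq


def drain(jobs, workers):
    """Total time to finish `jobs` (durations in seconds) on `workers` workers."""
    finish = [0.0] * workers             # next-free time per worker
    heapq.heapify(finish)
    for duration in jobs:
        start = heapq.heappop(finish)    # earliest available worker takes the job
        heapq.heappush(finish, start + duration)
    return max(finish)


videos = [1800] * 5                      # five ~30-minute MP4 conversions
timeline = [0.1] * 1000                  # a thousand quick timeline pushes

# Shared pool: the transcodes occupy every worker, timeline jobs wait behind them.
print("shared pool:", drain(videos + timeline, workers=5), "s until timelines are done")
# Dedicated pools: one worker reserved for media, four for timelines.
print("split pools:", drain(timeline, workers=4), "s for timelines,",
      drain(videos, workers=1), "s for media")
```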

Social Nasqueron

Nasqueron is a budding community of creative people: writers, developers and thinkers. We focus on free culture, ethics and being a positive change. We share values like respect, justice and equity.