Hey @mkemp,
I gave the problem another shot, and it still persists without a proper fix. In the end, I got it working with some extra tooling. Still, fixing the problem in Confluence itself would be good.
I run Confluence DC from the official Docker containers. These tests were done with version 9.0.3.
What I’ve done / Checklist:
- Upgraded AMPS to version 9.1.4 in the POM
- Made sure <enableQuickReload> is present in the POM
- Made sure the QuickReload plugin is installed
- Made sure the DC instance is in dev mode and has the right plugin.resource.directories
Initial result: it did not work out of the box. It looks like the webresource-temp/ folder gets populated and consulted before plugin.resource.directories is even considered.
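For reference, this is roughly how those two settings can be passed to the official Confluence Docker image. The container name, port, and mount paths below are assumptions from my setup, so adjust them to yours; the official image appends JVM_SUPPORT_RECOMMENDED_ARGS to the JVM options, which is one way to enable dev mode and point the server at the build output:

```shell
# Sketch, assuming a local build-output folder mounted into the container.
# atlassian.dev.mode enables dev mode; plugin.resource.directories tells the
# web resource system to serve resources from that folder.
docker run -d --name dc-confluence-server \
  -p 8090:8090 \
  -v /path/to/plugin/src/main/resources:/opt/frontend-assets \
  -e JVM_SUPPORT_RECOMMENDED_ARGS="-Datlassian.dev.mode=true -Dplugin.resource.directories=/opt/frontend-assets" \
  atlassian/confluence:9.0.3
```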
I am convinced that this is a bug in Confluence that could easily be fixed: somebody simply got the priorities of where to look for resources first mixed up.
Additionally, deleting the webresource-temp folder itself floods the Confluence logs with exceptions about file access problems in newer Confluence versions, so that alone is no longer a good workaround.
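The trick is therefore to delete only the folder's contents and leave the directory itself in place. A minimal stand-in demonstration (using a temp directory, since the real path lives inside the container):

```shell
# Stand-in for /var/atlassian/application-data/confluence/webresource-temp/
CACHE_DIR=$(mktemp -d)
touch "$CACHE_DIR/batch1.css" "$CACHE_DIR/batch2.js"

# Delete only the cached files; the directory itself survives, so Confluence
# does not start logging file-access exceptions about a missing folder.
rm -f "$CACHE_DIR"/* 2> /dev/null

ls -A "$CACHE_DIR"                        # no output: cache is empty
[ -d "$CACHE_DIR" ] && echo "directory still exists"
```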
How I got it working
For anyone in the same situation, this is how I got it working:
I’ve created a small shell script that watches the frontend build output folder (the one that is mounted as plugin.resource.directories). Every time a change happens in that folder, it executes a shell command in the Docker container that deletes all files in the webresource-temp/ folder. Because every single file written by the frontend build triggers a change event, the script waits a second before executing the command, so all changes within one second are pooled into a single run.
Our CLI for frontend build watching now starts that script in the background (and kills it when the watcher ends). Tada. The code:
#!/bin/bash
###
# This shell script monitors the output directory of the DC frontend build and executes the
# command provided as an argument whenever a new build is emitted.
# This is used by the wfe-dc CLI command to automatically clear the web resource cache
# of the DC server plugin whenever a new frontend build is emitted.
###
# Directory to monitor
MONITOR_DIR="./server/plugin/src/main/resources/assets"
# Command to execute on change
COMMAND="$1"
# Debounce time in seconds
DEBOUNCE_TIME=1
# Function to monitor directory and execute command on change
monitor_directory() {
  inotifywait -q -m -r -e modify,create,delete,move "$MONITOR_DIR" |
    while read -r directory events filename; do
      trigger_command
    done
}
# Function to trigger the command with debounce
trigger_command() {
  # Cancel a pending timer, if any, so rapid events collapse into one run.
  if [[ -n "$TIMER_PID" ]]; then
    kill "$TIMER_PID" 2>/dev/null
  fi
  (sleep "$DEBOUNCE_TIME" && bash -c "$COMMAND") &
  TIMER_PID=$!
}
# Run the monitor function indefinitely (restart if inotifywait ever exits)
while true; do
  monitor_directory
done
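The debounce is the only non-obvious part, so here is a self-contained sketch of that same pattern in isolation (the counter file is purely for demonstration), showing that rapid triggers collapse into a single command run:

```shell
#!/bin/bash
# Self-contained sketch of the debounce pattern above: each new event kills
# the pending timer, so only the last event's command actually runs.
DEBOUNCE_TIME=1
COUNTER_FILE=$(mktemp)                 # demo only: records actual command runs
COMMAND="echo run >> $COUNTER_FILE"

trigger_command() {
  if [[ -n "$TIMER_PID" ]]; then
    kill "$TIMER_PID" 2>/dev/null      # cancel the pending timer
  fi
  (sleep "$DEBOUNCE_TIME" && bash -c "$COMMAND") &
  TIMER_PID=$!
}

# Simulate five file-change events arriving within milliseconds...
for _ in 1 2 3 4 5; do trigger_command; done
sleep 2                                # ...then let the debounce window pass.
grep -c run "$COUNTER_FILE"            # prints 1: only the last trigger fired
```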
And inside our CLI shell script (just as inspiration; adapt it to your environment. We’re big fans of a central cli.sh for all default tasks):
if [ "$TODO" = "watch-frontend-dc" ]; then
  # Check that inotifywait is installed; if not, show an error and exit.
  if ! type inotifywait > /dev/null 2>&1; then
    echo "${C_ERROR}The inotifywait command is not installed. Please install it to use this feature.${C_RESET}"
    exit 1
  fi
  # The command that clears the DC web resource cache. Because - and _ are used
  # inconsistently by docker as name separators, use a filter regex matching both.
  CMD="docker exec $(docker ps --filter name=dc[-_]confluence[-_]server --quiet) bash -c 'rm /var/atlassian/application-data/confluence/webresource-temp/* 2> /dev/null'"
  # Start a watcher in the background to clear the DC cache on changes.
  bash ./frontend/build-utils/ExecuteOnEmittedBuildWatcher.sh "$CMD" &
  echo "Started watcher to clear DC web resource cache on frontend changes."
  # Function to clean up the watcher process and all sub-processes
  cleanup() {
    echo "Killing watcher processes"
    pkill -f "/frontend/build-utils/Execute"
  }
  # Set a trap to call cleanup on script exit
  trap cleanup EXIT
  run_in_code_container "cd frontend && npm install && npm run build:server:dev"
fi
Hopefully that helps somebody out. Cheers!